Operating System
Prepared by,
Madhavi S. Avhankar,
Assistant Professor,
Indira College of Commerce and Science, Pune
Chapter 1 Introduction to Operating Systems [6]
1.1 Operating Systems Overview - System overview and functions of operating systems
1.2 What does an OS do?
1.3 Operating system operations
1.4 Operating system structure
1.5 Protection and security
1.6 Computing Environments - Traditional, mobile, distributed, client/server, peer-to-peer computing
1.7 Open source operating system
1.8 Booting
1.9 Operating System services
1.10 System calls - Types of system calls and their working
Introduction:
Every general-purpose computer consists of hardware, an operating system, system
programs, and application programs. The hardware consists of memory, the CPU, the ALU,
I/O devices, peripheral devices, and storage devices. System programs include
compilers, loaders, editors, the OS, etc. Application programs include business
programs and database programs.
1.1 Operating Systems Overview - System overview and functions of operating systems
Defining Operating System:
A computer system has many resources (hardware and software) that may be
required to complete a task. The commonly required resources are input/output
devices, memory, file storage space, the CPU, etc. The operating system acts as a
manager of these resources and allocates them to specific programs and users
whenever necessary to perform a particular task. Therefore, the operating system is a
resource manager; it manages the resources of the computer system internally.
The resources are processors, memory, files, and I/O devices. In simple terms, an
operating system is the interface between the user and the machine.
An operating system is system software that provides an environment for the
user to interact with computer resources. The operating system is a program that runs
at all times on the computer, also called the kernel, and it provides various services
so that the user can use the computer system more efficiently.
Functions of Operating System
1. It boots the computer.
2. It performs basic computer tasks, e.g. managing the various peripheral
devices such as the mouse and keyboard.
3. It provides a user interface, e.g. command line, graphical user interface
(GUI)
4. It handles system resources such as the computer's memory and the sharing of
central processing unit (CPU) time by various applications and peripheral
devices.
5. It provides file management, which refers to the way the operating
system manipulates, stores, retrieves, and saves data.
6. Error Handling is done by the operating system. It takes preventive
measures whenever required to avoid errors.
1.2 What does an OS do?
A computer system can be divided into four major components:
1) Computer hardware
2) Operating System
3) Application Software/Program
4) User
Fig. Abstract view of computer system components
The most important program that runs on a computer is the operating system. Every
computer must have an operating system to run other programs. The operating
system controls and coordinates the use of the hardware among the various system
programs and application programs for the various users. It simply provides an
environment within which other programs can do useful work.
The role of operating system from user view and the system view is as below:
1.2.1 User View
The user view depends on the system interface that is used by the user. The
different types of user view can be explained as follows:
 If the user is using a personal computer, the operating system is largely
designed to make the interaction easy. Some attention is also paid to the
performance of the system, but there is no need for the operating system to
worry about resource utilization. This is because the personal computer uses
all the resources available and there is no sharing.
 If the user is using a system connected to a mainframe or a minicomputer,
the operating system is largely concerned with resource utilization. This is
because there may be multiple terminals connected to the mainframe and
the operating system makes sure that all the resources such as CPU,
memory, I/O devices etc. are divided uniformly between them.
 If the user is sitting on a workstation connected to other workstations
through networks, then the operating system needs to focus on both
individual usage of resources and sharing through the network. This happens
because the workstation exclusively uses its own resources but it also needs
to share files etc. with other workstations across the network.
 If the user is using a handheld computer such as a mobile, then the
operating system handles the usability of the device including a few remote
operations. The battery level of the device is also taken into account.
There are some devices that have very little or no user view because there is no
interaction with the users. Examples are embedded computers in home devices,
automobiles, etc.
1.2.2 System View
From the system's point of view, the operating system is the bridge between
applications and hardware. It is the program most intimately involved with the
hardware and is used to control it as required.
The system view of the operating system can be explained as follows:
 The system views the operating system as a resource allocator. There are
many resources such as CPU time, memory space, file storage space, I/O
devices etc. that are required by processes for execution. It is the duty of the
operating system to allocate these resources judiciously to the processes so
that the computer system can run as smoothly as possible.
 The operating system can also work as a control program. It manages all the
processes and I/O devices so that the computer system works smoothly and
there are no errors. It makes sure that the I/O devices work in a proper
manner without creating problems.
 Operating systems can also be viewed as a way to make using hardware
easier.
 Computers are needed to solve user problems, but it is not easy to work directly
with the computer hardware, so operating systems were developed to make
communication with the hardware easier.
 An operating system can also be considered as a program running at all
times in the background of a computer system (known as the kernel) and
handling all the application programs. This is the definition of the operating
system that is generally followed.
1.3 Operating-System Operations
As mentioned earlier, modern operating systems are interrupt driven. If there are
no processes to execute, no I/O devices to service, and no users to whom to
respond, an operating system will sit quietly, waiting for something to happen.
Events are almost always signalled by the occurrence of an interrupt or a trap.
A trap (or an exception) is a software-generated interrupt caused either by an error
(for example, division by zero or invalid memory access) or by a specific request
from a user program that an operating-system service be performed. The interrupt-
driven nature of an operating system defines that system's general structure. For
each type of interrupt, separate segments of code in the operating system
determine what action should be taken. An interrupt service routine is provided that
is responsible for dealing with the interrupt.
1.3.1 Dual-Mode Operation
In order to ensure the proper execution of the operating system, we must be able
to distinguish between the execution of operating-system code and user defined
code. The approach taken by most computer systems is to provide hardware
support that allows us to differentiate among various modes of execution. At the
very least, we need two separate modes of operation: user mode and kernel
mode (also called supervisor mode, system mode, or privileged mode). A bit,
called the mode bit, is added to the hardware of the computer to indicate the
current mode: kernel (0) or user (1). When the system is executing user
instructions, the system is in user mode.
When the user executes a system call to request operating-system services (OS
code), the system switches from user mode to kernel mode.
With the mode bit, we are able to distinguish between a task that is executed on
behalf of the operating system and one that is executed on behalf of the user.
 When the computer system is executing on behalf of a user application, the
system is in user mode.
 However, when a user application requests a service from the operating
system (via a system call), it must transition from user to kernel
mode to fulfill the request.
 At system boot time, the hardware starts in kernel mode.
 The operating system is then loaded and starts user applications in user
mode.
 Whenever a trap or interrupt occurs, the hardware switches from user mode
to kernel mode (that is, changes the state of the mode bit to 0).
 Thus, whenever the operating system gains control of the computer, it is
in kernel mode.
 The system always switches to user mode (by setting the mode bit to 1)
before passing control to a user program.
Fig. Transition from user mode to kernel mode
The dual mode of operation provides us with the means for protecting the
operating system from errant users—and errant users from one another. This
protection is achieved by designating some of the special machine instructions as
privileged instructions. The hardware allows privileged instructions to be
executed only in kernel mode. If an attempt is made to execute a privileged
instruction in user mode, the hardware does not execute the instruction but rather
treats it as illegal and traps it to the operating system. The instruction to switch to
user mode is an example of a privileged instruction. Some other examples include
I/O control, timer management, and interrupt management.
System calls are treated as privileged instructions. When a system call is executed,
it is treated as an interrupt: control passes to the appropriate interrupt service
routine in the operating system and the mode bit is set to kernel mode (0).
1.3.2 Timer
The operating system must maintain control over the CPU. The system
must prevent a user program from getting stuck in an infinite loop, or from never
calling system services and never returning control to the operating system. To
accomplish this goal, a timer can be used. A timer can be set to
interrupt the computer after a specified period. The period may be fixed (for
example, 1/60 second) or variable (for example, from 1 millisecond to 1 second).
A variable timer is generally implemented by a fixed-rate clock and a counter.
The operating system sets the counter. Every time the clock ticks, the counter is
decremented. When the counter reaches 0, an interrupt occurs. For instance, a 10-
bit counter with a 1-millisecond clock allows interrupts at intervals from 1
millisecond to 1,024 milliseconds, in steps of 1 millisecond. Before turning over
control to the user, the operating system ensures that the timer is set to interrupt.
If the timer interrupts, control transfers automatically to the operating system,
which may treat the interrupt as a fatal error or may give the program more time.
Clearly, instructions that modify the content of the timer are privileged. Thus, we
can use the timer to prevent a user program from running too long. A simple
technique is to initialize a counter with the amount of time that a program is
allowed to run. A program with a 7-minute time limit, for example, would have its
counter initialized to 420.
Every second, the timer interrupts and the counter is decremented by 1. As long
as the counter is positive, control is returned to the user program. When the
counter becomes negative, the operating system terminates the program for
exceeding the assigned time limit.
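The countdown idea can be illustrated with a small user-level sketch (POSIX assumed; the 5-second budget and the handler name are chosen for illustration): setitimer() asks for a SIGALRM signal every second, the handler decrements a counter, and the program stops once its time budget is exhausted.

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks_left = 5;   /* time budget: 5 seconds */

static void on_tick(int sig)
{
    (void)sig;
    ticks_left--;                              /* "timer interrupt": decrement the counter */
}

int main(void)
{
    /* fire SIGALRM after 1 second and then every 1 second */
    struct itimerval t = { {1, 0}, {1, 0} };

    signal(SIGALRM, on_tick);
    setitimer(ITIMER_REAL, &t, NULL);

    while (ticks_left > 0)                     /* the "user program" keeps running */
        pause();                               /* sleep until the next tick */

    printf("time budget exhausted\n");         /* the OS would terminate the program here */
    return 0;
}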
1.4 Operating system structure
For efficient performance and implementation, an OS should be partitioned into
separate subsystems, each with carefully defined tasks, inputs, outputs, and
performance characteristics. These subsystems can then be arranged in various
architectural configurations:
Simple structure:
Such operating systems do not have a well-defined structure; they are small, simple,
and limited systems. The interfaces and levels of functionality are not well
separated. MS-DOS is an example of such an operating system. In MS-DOS,
application programs are able to access the basic I/O routines. This type of
operating system causes the entire system to crash if one of the user programs
fails.
Diagram of the structure of MS-DOS is shown below.
Advantages of Simple structure:
It delivers better application performance because of the few interfaces between
the application program and the hardware.
It is easy for kernel developers to develop such an operating system.
Disadvantages of Simple structure:
The structure is very complicated, as no clear boundaries exist between modules.
It does not enforce data hiding in the operating system.
Layered structure:
An OS can be broken into pieces, giving much more control over the system. In this
structure the OS is broken into a number of layers (levels). The bottom layer (layer
0) is the hardware and the topmost layer (layer N) is the user interface. These
layers are designed so that each layer uses the functions of the lower-level layers
only. This simplifies debugging: if the lower-level layers have already been debugged
and an error occurs, the error must be in the layer being tested, because the
lower-level layers are known to be correct.
The main disadvantage of this structure is that at each layer the data needs to be
modified and passed on, which adds overhead to the system. Moreover, careful
planning of the layers is necessary, as a layer can use only lower-level layers. UNIX
is an example of this structure.
Advantages of Layered structure:
Layering makes it easier to enhance the operating system as implementation of a
layer can be changed easily without affecting the other layers.
It is very easy to perform debugging and system verification.
Disadvantages of Layered structure:
In this structure the application performance is degraded as compared to simple
structure.
It requires careful planning for designing the layers as higher layers use the
functionalities of only the lower layers.
Micro-kernel:
This structure designs the operating system by removing all non-essential
components from the kernel and implementing them as system and user
programs. This results in a smaller kernel called the micro-kernel.
The advantage of this structure is that new services are added to user
space and do not require the kernel to be modified. Thus it is more secure and
reliable: if a service fails, the rest of the operating system remains untouched.
Mac OS is an example of this type of OS.
Advantages of Micro-kernel structure:
It makes the operating system portable to various platforms.
As microkernels are small, they can be tested effectively.
Disadvantages of Micro-kernel structure:
The increased level of inter-module communication degrades system performance.
Modular structure or approach:
It is considered the best approach for an OS. It involves designing a modular
kernel. The kernel has only a set of core components, and other services are added
as dynamically loadable modules either at boot time or at run time.
It resembles the layered structure in that each kernel module has defined and
protected interfaces, but it is more flexible than the layered structure because a
module can call any other module.
For example Solaris OS is organized as shown in the figure.
1.5 Protection and security
If a computer system has multiple users and allows the concurrent execution of
multiple processes, then access to data must be regulated. For that purpose,
mechanisms ensure that files, memory segments, CPU, and other resources can be
operated on by only those processes that have gained proper authorization from
the operating system. For example, memory-addressing hardware ensures that a
process can execute only within its own address space. The timer ensures that no
process can gain control of the CPU without eventually relinquishing control.
Device-control registers are not accessible to users, so the integrity of the various
peripheral devices is protected.
Protection, then, is any mechanism for controlling the access of processes
or users to the resources defined by a computer system.
Protection can improve reliability by detecting errors at the interfaces between
component subsystems. Furthermore, an unprotected resource cannot defend
against use (or misuse) by an unauthorized or incompetent user. A protection-
oriented system provides a means to distinguish between authorized and
unauthorized usage. A system can have adequate protection but still be prone to
failure and allow inappropriate access. Consider a user whose authentication
information (username and password) is stolen. Her data could be copied or
deleted, even though file and memory protection are working. It is the job of
security to defend a system from external and internal attacks. Such attacks spread
across a huge range and include viruses and worms, denial-of-service attacks
(which use all of a system’s resources and so keep legitimate users out of the
system), identity theft, and theft of service (unauthorized use of a system).
Prevention of some of these attacks is considered an operating-system function on
some systems, while other systems leave it to policy or additional software. Due to
the alarming rise in security incidents, operating-system security features represent
a fast-growing area of research and implementation. Most operating systems
maintain a list of user names and associated user identifiers (user IDs). In
Windows, this is a security ID (SID). These numerical IDs are unique, one per
user. When a user logs into the system, the authentication stage determines the
appropriate user ID for the user. That user ID is associated with all of the user’s
processes and threads. In some circumstances, the owner of a file on a UNIX
system may be allowed to issue all operations on that file, whereas a selected set of
users may be allowed only to read the file. To accomplish this, we need to define a
group name and the set of users belonging to that group. Group functionality can
be implemented as a system-wide list of group names and group identifiers. A user
can be in one or more groups, depending on operating-system design decisions.
The user may need access to a device that is restricted, for example. Operating
systems provide various methods to allow privilege escalation. On UNIX, for
instance, the setuid attribute on a program causes that program to run with the
user ID of the owner of the file, rather than the current user's ID. The process runs
with this effective UID until it turns off the extra privileges or terminates.
1.6 Computing Environments
There are different ways in which computer systems are used in computing
environments. Some commonly used environments are described below:
1.6.1 Traditional Computing
Just a few years ago, computing environment consisted of PCs connected to a network,
with servers providing file and print services. Remote access was awkward, and
portability was achieved by use of laptop computers. Terminals attached to mainframes
were widespread at many companies. The current trend is toward providing more ways
to access these computing environments. Web technologies and increasing WAN
bandwidth are stretching the boundaries of traditional computing. Companies establish
portals, which provide Web accessibility to their internal servers. Network
computers (or thin clients)—which are essentially terminals that understand web-
based
computing—are used in place of traditional workstations where more security or easier
maintenance is desired. Mobile computers can synchronize with PCs to allow very
portable use of company information. Mobile computers can also connect to wireless
networks and cellular data networks to use the company’s Web portal (as well as the
myriad other Web resources). At home, most users once had a single computer with a
slow modem connection to the office, the Internet, or both. Today, network-connection
speeds once available only at great cost are relatively inexpensive in many places,
giving home users more access to more data. These fast data connections are allowing
home computers to serve up Web pages and to run networks that include printers,
client PCs, and servers. Many homes use firewalls to protect their networks from
security breaches. To optimize the use of the computing resources, multiple users
shared time on these systems. Time-sharing systems used a timer and scheduling
algorithms to cycle processes rapidly through the CPU, giving each user a share of the
resources.
1.6.2 Client-Server Computing
In this type of system a centralized architecture is used. Terminals are
connected to centralized, high-speed, high-capacity server systems. Server
systems satisfy the requests generated by client systems. This form of specialized
distributed system is called a client-server system.
The following fig. shows the general structure of a client-server system:
Server systems can be broadly categorized as compute servers and file servers:
• The compute-server system provides an interface to which a client can send a
request to perform an action (for example, read data). In response, the server
executes the action and sends the results to the client. A server running a database
that responds to client requests for data is an example
of such a system.
• The file-server system provides a file-system interface where clients can create,
update, read, and delete files. An example of such a system is a web server that
delivers files to clients running web browsers.
1.6.3 Peer-to-Peer Computing
Another structure for a distributed system is the peer-to-peer (P2P) system model.
In this model, clients and servers are not distinguished from one another. Instead,
all nodes within the system are considered peers, and each may act as either a
client or a server, depending on whether it is requesting or providing a service.
Peer-to-peer systems offer an advantage over traditional client-server systems: in a
client-server system the server is a bottleneck, but in a peer-to-peer system services
can be provided by several nodes distributed throughout the network. To
participate in a peer-to-peer system, a node must first join the network of peers.
Once a node has joined the network, it can begin providing services to—and
requesting services from—other nodes in the network. Determining what services
are available is accomplished in one of two general ways:
• When a node joins a network, it registers its service with a centralized lookup
service on the network. Any node desiring a specific service first contacts this
centralized lookup service to determine which node provides the service. The
remainder of the communication takes place between the client and the service
provider.
• An alternative scheme uses no centralized lookup service. Instead, a peer acting
as a client must discover what node provides a desired service by broadcasting a
request for the service to all other nodes in the network. The node (or nodes)
providing that service responds to the peer making the request. To support this
approach, a discovery protocol must be provided that allows peers to discover
services provided by other peers in the network.
Following fig. shows the general structure of peer-to-peer computing.
1.6.4 Distributed Systems
A distributed system is a collection of physically separate, possibly heterogeneous,
computer systems that are networked to provide users with access to the various
resources that the system maintains. Access to a shared resource increases
computation speed, functionality, data availability, and reliability. Some
operating systems generalize network access as a form of file access. Others make
users specifically invoke network functions. Generally, systems contain a mix of the
two modes — for example FTP and NFS. The protocols that create a distributed
system can greatly affect that system’s utility and popularity. A network, in the
simplest terms, is a communication path between two or more systems. Distributed
systems depend on networking for their functionality. Networks vary by the
protocols used, the distances between nodes, and the transport media. TCP/IP is
the most common network protocol, and it provides the fundamental architecture of
the Internet.
Reasons for distributed operating system/Advantages of distributed
operating system:
1. Resource sharing:
If a number of different sites are connected together, then a user at one site
may be able to access the resources available at another site.
2. Computation speedup: A particular computation can be partitioned into
sub-computations that can run concurrently at different sites, speeding up the computation.
3. Reliability: In a distributed system, if one site fails, the remaining sites can
continue operating, and hence reliability is improved.
4. Communication: When various sites are connected together, users can exchange
information with the help of the communication network.
Fig. General structure of distributed system
A network operating system is an operating system that provides features such as
file sharing across the network, along with a communication scheme that allows
different processes on different computers to exchange messages. A computer
running a network operating system acts autonomously from all other computers
on the network, although it is aware of the network and is able to communicate
with other networked computers. A distributed operating system provides a less
autonomous environment. The different computers communicate closely enough to
provide the illusion that only a single operating system controls the network.
Extras - Differences
1. Multiprocessor and clustered system
1. Multiprocessor system: a single system with more than one processor.
   Clustered system: multiple systems joined together to act as one.
2. Multiprocessor system: multiple CPUs share memory, bus, and other peripheral devices.
   Clustered system: clustered computers have shared storage and are connected together using a LAN or another fast network.
3. Multiprocessor system: provides low availability compared to a clustered system; if the computer fails, all processors in that computer are unable to continue the work.
   Clustered system: provides high availability of services; if one or more computers in the cluster fail, the system continues to work.
4. Multiprocessor system: cost is less compared to a clustered system.
   Clustered system: cost is higher than a multiprocessor system.
5. Multiprocessor system: uses local storage.
   Clustered system: uses a SAN (Storage Area Network), a pool of storage.
2. Asymmetric and Symmetric multiprocessing
1. Asymmetric multiprocessing: the processors are not treated equally.
   Symmetric multiprocessing: all the processors are treated equally.
2. Asymmetric multiprocessing: operating-system tasks are done by the master processor.
   Symmetric multiprocessing: operating-system tasks are done by the individual processors.
3. Asymmetric multiprocessing: no communication between processors, as they are controlled by the master processor.
   Symmetric multiprocessing: all processors communicate with one another through shared memory.
4. Asymmetric multiprocessing: the processors work in a master-slave relationship.
   Symmetric multiprocessing: each processor takes processes from a common ready queue.
5. Asymmetric multiprocessing: such systems are cheaper.
   Symmetric multiprocessing: such systems are costlier.
6. Asymmetric multiprocessing: such systems are easier to design.
   Symmetric multiprocessing: such systems are more complex to design.
3. Client-Server and Peer-to-Peer computing
1. Client-server computing: clients and the server are differentiated; specific servers and clients are present.
   Peer-to-peer computing: clients and servers are not differentiated.
2. Client-server computing: focuses on information sharing.
   Peer-to-peer computing: focuses on connectivity.
3. Client-server computing: a centralized server is used to store the data.
   Peer-to-peer computing: each peer has its own data.
4. Client-server computing: the server responds to the services requested by the clients.
   Peer-to-peer computing: each node can both request and respond to services.
5. Client-server computing: client-server networks are costlier than peer-to-peer networks.
   Peer-to-peer computing: peer-to-peer networks are less costly than client-server networks.
6. Client-server computing: client-server networks are more stable than peer-to-peer networks.
   Peer-to-peer computing: peer-to-peer networks become less stable as the number of peers increases.
7. Client-server computing: used for both small and large networks.
   Peer-to-peer computing: generally suited to small networks with fewer than 10 computers.
1.7 Open source operating System
The term "open source" refers to computer software or applications where the owners
or copyright holders enable the users or third parties to use, see, and edit the product's
source code. The source code of an open-source OS is publicly visible and editable. The
commonly used operating systems such as Apple's iOS, Microsoft's Windows, and Apple's Mac
OS are closed-source operating systems. Open-source software is licensed in such a way that it
is permissible to produce as many copies as you want and to use them wherever you
like. It generally uses fewer resources than its commercial counterpart because it lacks
any code for licensing, promoting other products, authentication, attaching
advertisements, etc.
The open-source operating system allows the use of code that is freely distributed and
available to anyone and for commercial purposes. Being an open-source application or
program, the program source code of an open-source OS is available. The user may
modify or change those codes and develop new applications according to the user
requirement. Some basic examples of the open-source operating systems are Linux,
OpenSolaris, FreeRTOS, OpenBSD, FreeBSD, Minix, etc.
The open-source movement took shape in the late 1990s, and across the industry there are
now open-source alternatives for almost every software program. Thanks to technological
developments and innovations, many open-source operating systems have been
developed since the dawn of the 21st century.
1.8 Booting
After an operating system is generated, it must be made available for use by the
hardware. But how does the hardware know where the kernel is, or how to load that
kernel? The procedure of starting a computer by loading the kernel is known
as booting the system. Hence a special program, stored in ROM, is needed to do this
job, known as the bootstrap loader. Example: the BIOS.
A modern PC BIOS (Basic Input/Output System) supports booting from various
devices. Typically, the BIOS will allow the user to configure a boot order. If the boot
order is set to: CD Drive, Hard Disk Drive, Network
Then the BIOS will try to boot from the CD drive first, and if that fails then it will try
to boot from the hard disk drive, and if that fails then it will try to boot from the
network, and if that fails then it won’t boot at all.
Booting is a startup sequence that starts the operating system of a computer when
it is turned on. A boot sequence is the initial set of operations that the computer
performs when it is switched on. Every computer has a boot sequence. Bootstrap
loader locates the kernel, loads it into main memory and starts its execution. In
some systems, a simple bootstrap loader fetches a more complex boot program
from disk, which in turn loads the kernel.
Dual Booting:
When two operating systems are installed on a computer system, it is called
dual booting. In fact, multiple operating systems can be installed on such a system.
But how does the system know which operating system to boot? A boot loader that
understands multiple file systems and multiple operating systems can occupy the
boot space. Once loaded, it can boot one of the operating systems available on the
disk. The disk can have multiple partitions, each containing a different type of
operating system. When the computer system is turned on, a boot manager program
displays a menu, allowing the user to choose the operating system to use.
1.9 Operating System services
An operating system provides the environment within which programs are
executed. It provides certain services to programs and to the users of those
programs. The specific services provided differ from one operating system to
another, but we can identify common classes. These operating-system services are
provided for the convenience of the programmer, to make the programming task
easier.
Following Figure shows one view of the various operating-system services and how
they interrelate.
Operating System Services: One set of operating system services provides
functions that are helpful to the user.
 User interface: Almost all operating systems have a user interface (UI). This
interface can take several forms. One is a command-line interface (CLI), which
uses text commands and a method for entering them (a keyboard for typing in
commands in a specific format with specific options). Another is a batch interface,
in which commands and directives to control those commands are entered into
files, and those files are executed. Most commonly, a graphical user interface (GUI)
is used. Here, the interface is a window system with a pointing device to direct
I/O, choose from menus, and make selections, and a keyboard to enter text. Some
systems provide two or all three of these variations.
 Program execution: The system must be able to load a program into memory
and to run that program. The program must be able to end its execution, either
normally or abnormally (indicating error).
 I/O operations: A running program may require I/O, which may involve a file or
an I/O device. For specific devices, special functions may be desired (such as
recording to a CD or DVD drive or blanking a display screen). For efficiency and
protection, users usually cannot control I/O devices directly. Therefore, the
operating system must provide a means to do I/O.
 File-system manipulation: Many programs need to read and write files and
directories. They also need to create and delete them by name, search for a given
file, and list file information. Some operating systems include permissions
management to allow or deny access to files or directories based on file ownership.
Many operating systems provide a variety of file systems, sometimes to allow
personal choice and sometimes to provide specific features or performance
characteristics.
 Communications: There are many circumstances in which one process needs to
exchange information with another process. Such communication may occur
between processes that are executing on the same computer or between processes
that are executing on different computer systems tied together by a computer
network. Communications may be implemented via shared memory, in which two
or more processes read and write to a shared section of memory, or message
passing, in which packets of information in predefined formats are moved between
processes by the operating system.
 Error detection: The operating system needs to detect and correct errors
constantly. Errors may occur in the CPU and memory hardware (such as a
memory error or a power failure), in I/O devices (such as a parity error on disk, a
connection failure on a network, or lack of paper in the printer), and in the user
program (such as an arithmetic overflow, an attempt to access an illegal memory
location, or a too great use of CPU time). For each type of error, the operating
system should take the appropriate action to ensure correct and consistent
computing. Sometimes, it has no choice but to halt the system. At other times, it
might terminate an error-causing process or return an error code to a process for
the process to detect and possibly correct.
Another set of operating system functions exists not for helping the user but rather
for ensuring the efficient operation of the system itself. Systems with multiple
users can gain efficiency by sharing the computer resources among the users.
 Resource allocation: When there are multiple users or multiple jobs running at
the same time, resources must be allocated to each of them. The operating
system manages many different types of resources. Some (such as CPU cycles,
main memory, and file storage) may have special allocation code, whereas
others (such as I/O devices) may have much more general request and release
code. For instance, in determining how best to use the CPU , operating systems
have CPU - scheduling routines that take into account the speed of the CPU , the
jobs that must be executed, the number of registers available, and other
factors. There may also be routines to allocate printers, USB storage drives, and
other peripheral devices.
 Accounting: We want to keep track of which users use how much and what
kinds of computer resources. This record keeping may be used for accounting
(so that users can be billed) or simply for accumulating usage statistics. Usage
statistics may be a valuable tool for researchers who wish to reconfigure the
system to improve computing services.
 Protection and security: The owners of information stored in a multiuser or
networked computer system may want to control use of that information. When
several separate processes execute concurrently, it should not be possible for
one process to interfere with the others or with the operating system itself.
Protection involves ensuring that all access to system resources is controlled.
Security of the system from outsiders is also important. Such security starts
with requiring each user to authenticate himself or herself to the system, usually
by means of a password, to gain access to system resources.
1.10 System calls - Types of system calls and their working
System calls provide an interface to the services made available by an operating
system. These calls are generally available as routines written in C and C++.
1.10.1 How system calls are used:
Consider an example of writing a simple program to read data from one file and
copy them to another file.
-The first input that the program will need is the names of the two files: the input
file and the output file. These names can be specified in many ways, depending on
the operating system design. One approach is for the program to ask the user for
the names. In an interactive system, this approach will require a sequence of
system calls, first to write a prompting message on the screen and then to read
from the keyboard the characters that define the two files. On mouse-based and
icon-based systems, a menu of file names is usually displayed in a window. The
user can then use the mouse to select the source name, and a window can be
opened for the destination name to be specified. This sequence requires many I/O
system calls.
-Once the two file names have been obtained, the program must open the input file
and create the output file. Each of these operations requires another system call.
-When both files are set up, we enter a loop that reads from the input file (a
system call) and writes to the output file (another system call).
-Finally, after the entire file is copied, the program may close both files (another
system call), write a message to the console or window (more system calls), and
finally terminate normally (the final system call).
Fig. Example of how system calls are used
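A minimal sketch of this copy sequence using POSIX system calls (error handling kept to a minimum; the buffer size is an arbitrary choice):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <input> <output>\n", argv[0]);
        return 1;
    }

    int in = open(argv[1], O_RDONLY);                            /* open the input file   */
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644); /* create the output file */
    if (in < 0 || out < 0) {
        perror("open");                                          /* abnormal termination  */
        return 1;
    }

    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)                  /* read (system call)    */
        write(out, buf, n);                                      /* write (system call)   */

    close(in);                                                   /* close both files      */
    close(out);
    return 0;                                                    /* normal termination    */
}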
Many operating systems provide an Application Programming Interface (API), which
specifies a set of functions that are available to application programmers.
Application programmers use the API calls directly to develop their applications;
the API hides the details of the underlying system calls from the programmer.
Three of the most common APIs available to application programmers are:
1. The Windows API for Windows systems.
2. The POSIX API for POSIX-based systems (which include virtually all versions of
UNIX, Linux, and Mac OS X).
3. The Java API for programs that run on the Java virtual machine.
1.10.2 An example of a standard API:
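As a concrete illustration (assuming a POSIX system), the read() function declared in <unistd.h> is one such standard API function; it maps onto the read system call, and its parameters and return value are described in the comments below.

#include <stdio.h>
#include <unistd.h>

/* ssize_t read(int fd, void *buf, size_t count);
 *   fd    - file descriptor of the file to be read
 *   buf   - buffer into which the data is copied
 *   count - maximum number of bytes to read
 * Returns the number of bytes actually read,
 * 0 at end of file, or -1 if an error occurred. */

int main(void)
{
    char buf[128];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);  /* read from standard input */
    if (n >= 0)
        printf("read %zd bytes\n", n);
    return 0;
}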
1.10.3 The run-time support system (system-call interface):
The run time support system (a set of functions built into libraries included with a
compiler) provides a system call interface that serves as the link to system calls
made available by the operating system. The system-call interface intercepts
function calls in the API and invokes the necessary system calls within the
operating system. Typically, a number is associated with each system call, and the
system-call interface maintains a table indexed according to these numbers. The
system call interface then invokes the intended system call in the operating system
kernel and returns the status of the system call and any return values. The caller
need know nothing about how the system call is implemented or what it does
during execution. Rather, the caller need only obey the API and understand what
the operating system will do as a result of the execution of that system call. Thus,
most of the details of the operating-system interface are hidden from the
programmer by the API and are managed by the run-time support library. The following
figure shows the relationship between an API, the system-call interface, and the
operating system; it illustrates how the operating system handles a user
application invoking the open() system call.
1.10.4 Passing parameters to system calls:
Three general methods are used to pass parameters to the operating system.
1. Pass the parameters in registers. Parameters are accessed much faster in
registers, but in some cases there may be more parameters than registers.
2. In these cases, the parameters are generally stored in a block, or table, in
memory, and the address of the block is passed as a parameter in a register. This
is the approach taken by Linux and Solaris.
3. Parameters also can be placed, or pushed, onto the stack by the program and
popped off the stack by the operating system. Some operating systems prefer the
block or stack method because those approaches do not limit the number or length
of parameters being passed.
Fig. Passing of parameters as a table.
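As a concrete illustration of the register method, assuming Linux with glibc: the syscall() wrapper places the system-call number and the arguments into registers before trapping into the kernel. An ordinary program would simply call write(); the wrapper is used here only to make the parameter passing visible.

#define _GNU_SOURCE
#include <sys/syscall.h>   /* SYS_write: the system-call number */
#include <unistd.h>        /* syscall(), STDOUT_FILENO */

int main(void)
{
    const char msg[] = "hello via a raw system call\n";

    /* The call number and the three parameters (fd, buffer address,
     * byte count) are loaded into registers before the trap; the
     * kernel picks them up from those registers. */
    syscall(SYS_write, STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}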
1.10.5 Types of System Calls
System calls can be grouped roughly into six major categories:
1. Process control
2. File manipulation
3. Device manipulation
4. Information maintenance
5. Communications
6. Protection.
Process Control
A running program needs to be able to halt its execution either normally ( end() )
or abnormally ( abort() ). If a system call is made to terminate the currently
running program abnormally, or if the program runs into a problem and causes an
error trap, a dump of memory is sometimes taken and an error message
generated. The dump is written to disk and may be examined by a debugger—a
system program designed to aid the programmer in finding and correcting errors,
or bugs—to determine the cause of the problem. Under either normal or abnormal
circumstances, the operating system must transfer control to the invoking
command interpreter, which then reads and carries out the next command. The
command interpreter loads the program into memory (load system call) as directed
by the user command, such as a text command or a mouse click. Once the program
is loaded, the operating system starts its execution (execute system call); every
program is executed by the operating system after a separate process is created for
it (create process). We may want to terminate the executing process normally or
abnormally by invoking the terminate process system call. In order to control the
execution of processes, we need to determine and reset attributes such as priority,
maximum allowable execution time, maximum memory allocation, and so on (get
process attributes and set process attributes). Sometimes we may need to wait for
processes to finish their execution, to wait for a certain amount of time (wait
time), or to wait for a specific event to occur (wait event). Waiting processes
receive a signal when the event occurs so that they wake up and continue execution
(signal event).
File Management
In most cases, the user or programmer first needs to be able to create() and delete()
files. Either system call requires the name of the file and perhaps some of the file's
attributes. Once the file is created, we need to open() it in order to use it. We may also
read(), write(), or reposition() it (rewind or skip to the end of the file, for
example). Finally, we need to close() the file, indicating that we are no longer using it.
We may need these same sets of operations for directories if we have a directory
structure for organizing files in the file system. In addition, for either files or
directories, we need to be able to determine the values of various attributes and
perhaps to reset them if necessary. File attributes include the file name, file type,
protection codes, accounting information, and so on. At least two system calls, get
file attributes() and set file attributes() , are required for this function. Some
operating systems provide many more calls, such as calls for file move() and
copy() . Others might provide an API that performs those operations using code
and other system calls, and others might provide system programs to perform
those tasks. If the system programs are callable by other programs, then each can
be considered an API by other system programs.
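A small sketch of the get/set file attribute idea under POSIX (the file name example.txt is a placeholder): stat() retrieves a file's attributes and chmod() changes its protection code.

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;

    /* "get file attributes": size, permission bits, owner, timestamps ... */
    if (stat("example.txt", &st) == 0)
        printf("size: %lld bytes, mode: %o\n",
               (long long)st.st_size, st.st_mode & 0777);

    /* "set file attributes": change the protection code to rw-r----- */
    chmod("example.txt", 0640);
    return 0;
}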
Device Management
A process may need several resources to execute —main memory, disk drives,
access to files, and so on. If the resources are available, they can be granted, and
control can be returned to the user process. Otherwise, the process will have to
wait until sufficient resources are available. The various resources controlled by the
operating system can be thought of as devices. Some of these devices are physical
devices (for example, disk drives), while others can be thought of as abstract or
virtual devices (for example, files). A system with multiple users may require us to
first request() a device, to ensure exclusive use of it. After we are finished with
the device, we release() it. These functions are similar to the open() and close()
system calls for files. Other operating systems allow unmanaged access to devices.
Once the device has been requested (and allocated to us), we can read() , write()
, and (possibly) reposition() the device, just as we can with files. Many
operating systems, e.g. UNIX and Linux, treat every device as a file. All
device-management operations are almost the same as file-management operations.
The user interface can also make files and devices appear similar even though the
underlying system calls are dissimilar.
Information Maintenance
Many system calls exist simply for the purpose of transferring information between
the user program and the operating system. For example, most systems have a
system call to return the current time() and date() . Other system calls may
return information about the system, such as the number of current users, the
version number of the operating system, the amount of free memory or disk
space, and so on. Another set of system calls is helpful in debugging a program.
Many systems provide system calls to dump() memory. This provision is useful for
debugging. A program trace lists each system call as it is executed. Even
microprocessors provide a CPU mode known as single step, in which a trap is
executed by the CPU after every instruction. The trap is usually caught by a
debugger. Many operating systems provide a time profile of a program to indicate
the amount of time that the program executes at a particular location or set of
locations. A time profile requires either a tracing facility or regular timer interrupts.
At every occurrence of the timer interrupt, the value of the program counter is
recorded. With sufficiently frequent timer interrupts, a statistical picture of the time
spent on various parts of the program can be obtained. In addition, the operating
system keeps information about all its processes, and system calls are used to
access this information. Generally, calls are also used to reset the process
information ( get process attributes() and set process attributes() ).
Communication
There are two common models of interprocess communication: the message-
passing model and the shared-memory model.
In the message-passing model, the communicating processes exchange
messages with one another to transfer information. Messages can be exchanged
between the processes either directly or indirectly through a common mailbox.
Before communication can take place, a connection must be opened. The name of
the other communicator must be known, be it another process on the same system
or a process on another computer connected by a communications network. Each
computer in a network has a host name by which it is commonly known. A host also
has a network identifier, such as an IP address. Similarly, each process has a
process name, and this name is translated into an identifier by which the operating
system can refer to the process. The get_hostid() and get_processid() system
calls do this translation. The identifiers are then passed to the general-purpose
open() and close() calls provided by the file system or to specific
open_connection() and close_connection() system calls, depending on the
system’s model of communication. The recipient process usually must give its
permission for communication to take place with an accept_connection() call.
Most processes that will be receiving connections are special-purpose daemons,
which are system programs provided for that purpose. They execute a
wait_for_connection() call and are awakened when a connection is made. The
source of the communication, known as the client, and the receiving daemon,
known as a server, then exchange messages by using read_message() and
write_message() system calls. The close_connection() call terminates the
communication.
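The call names above follow the generic ones used in the text. As a minimal concrete sketch under POSIX, message passing between two related processes can be built on a pipe, with write() and read() playing the roles of write_message() and read_message():

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                                   /* create the communication channel */

    if (fork() == 0) {                          /* child acts as the "client" */
        close(fd[0]);
        const char *msg = "hello from the client";
        write(fd[1], msg, strlen(msg) + 1);     /* write_message() */
        return 0;
    }

    close(fd[1]);                               /* parent acts as the "server" */
    char buf[64];
    read(fd[0], buf, sizeof buf);               /* read_message() */
    printf("received: %s\n", buf);
    wait(NULL);
    return 0;
}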
In the shared-memory model, processes use shared memory create() and
shared memory attach() system calls to create and gain access to regions of
memory owned by other processes. Recall that, normally, the operating system
tries to prevent one process from accessing another process’s memory. Shared
memory requires that two or more processes agree to remove this restriction. They
can then exchange information by reading and writing data in the shared areas. The
form of the data is determined by the processes and is not under the operating
system’s control. The processes are also responsible for ensuring that they are not
writing to the same location simultaneously. Message passing is useful for
exchanging smaller amounts of data, because no conflicts need be avoided. It is
also easier to implement than is shared memory for intercomputer communication.
Shared memory allows maximum speed and convenience of
communication, since it can be done at memory transfer speeds when it
takes place within a computer. Problems exist, however, in the areas of
protection and synchronization between the processes sharing memory.
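A minimal sketch of the shared-memory model under POSIX (the region name /demo_region is chosen for illustration, error handling is omitted; older Linux systems may need linking with -lrt): shm_open() plays the role of the "shared memory create" call and mmap() the role of "shared memory attach".

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* "shared memory create": a named region other processes can also open */
    int fd = shm_open("/demo_region", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);

    /* "shared memory attach": map the region into this address space */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(p, "data visible to any process that maps /demo_region");
    printf("%s\n", p);

    munmap(p, 4096);                 /* detach the region ...       */
    shm_unlink("/demo_region");      /* ... and remove it when done */
    return 0;
}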
Protection
Protection provides a mechanism for controlling access to the resources provided by
a computer system. Historically, protection was a concern only on
multiprogrammed computer systems with several users. However, with the advent
of networking and the Internet, all computer systems, from servers to mobile
handheld devices, must be concerned with protection. Typically, system calls
providing protection include set permission() and get permission() , which
manipulate the permission settings of resources such as files and disks. The allow
user() and deny user() system calls specify whether particular users can—or
cannot—be allowed access to certain resources.
2. Process Management
2.1 Process Concept – The processes, Process states, Process control block.
2.2 Process Scheduling – Scheduling queues, Schedulers, context switch
2.3 Operations on Process – Process creation with program using fork(),
Process termination
2.4 Thread Scheduling- Threads, benefits, Multithreading Models, Thread
Libraries
2.1 Process Concepts
2.1.1 Process
A process is a program in execution, loaded into main memory.
A process consists of four sections in main memory:
1. Code Section: This section contains the instructions of the program.
2. Data Section: This section contains global and static variables.
3. Heap Section: This section contains memory allocated dynamically during the execution of the
program.
4. Stack Section: This section contains function parameters, local variables, and the return
addresses of functions.
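A small annotated C program, indicating (for a typical layout) the section in which each part of the process resides:

#include <stdlib.h>

int counter = 0;                  /* data section: global variable          */

int add(int a, int b)             /* code section: program instructions     */
{
    int sum = a + b;              /* stack section: parameters and locals   */
    return sum;
}

int main(void)
{
    int *buf = malloc(100 * sizeof *buf);   /* heap section: dynamic memory */
    counter = add(2, 3);
    free(buf);
    return 0;
}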
2.1.2 Process states:
During its lifetime, a process can be in one of the following five states:
1. New: The process is being created.
2. Running: The process's instructions are being executed.
3. Waiting: The process is waiting for some event to occur.
4. Ready: The process is waiting to be assigned to a processor.
5. Terminated: The process has finished execution.
The important thing is that only one process can be running on any processor at any time, but many
processes may be in the ready and waiting states. The ready processes are kept in a "Ready
Queue".
Explanation of Process States:
New -> Ready: The operating system creates a process and prepares it to be executed; then the
operating system moves the process into the "Ready Queue".
Ready -> Running: When it is time to select a process to run, the operating system selects one
of the jobs from the ready queue and moves the process from the ready state to the running
state.
Running -> Terminated: When the execution of a process has been completed, then the OS
terminates that process from the running state.
Running -> Ready: When the time slice of the running process expires, the operating system
moves the process back to the ready state.
Running -> Waiting: A process is put into the waiting state if it needs an event to occur or
needs an I/O device. If the operating system cannot immediately provide the event or device,
the process is moved to the waiting state by the operating system.
Waiting -> Ready: The process in the blocked state is moved to the ready state when the event
for which it has been waiting occurs.
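Inside an operating system the current state is simply a field recorded for each process. The sketch below is a purely illustrative representation of the five states and one transition; it is not any real kernel's code, and all names are made up.

#include <stdio.h>

/* The five process states described above */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

static const char *state_name[] = { "new", "ready", "running", "waiting", "terminated" };

/* Example transition: the scheduler dispatches a ready process */
enum proc_state dispatch(enum proc_state s)
{
    return (s == READY) ? RUNNING : s;   /* only a ready process can be dispatched */
}

int main(void)
{
    enum proc_state p = NEW;
    p = READY;                 /* New -> Ready: admitted by the OS           */
    p = dispatch(p);           /* Ready -> Running: selected by the scheduler */
    printf("process is now %s\n", state_name[p]);
    return 0;
}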
2.1.3 Process Control Block(PCB)
Every process is represented in the operating system by a process control block, which is also
called a task control block.
The important components of a PCB are:
 Process state: A process can be new, ready, running, waiting, etc.
 Program counter: The program counter lets you know the address of the next
instruction, which should be executed for that process.
 CPU registers: This component includes accumulators, index and general-purpose
registers, and information of condition code.
 CPU scheduling information: This component includes a process priority, pointers for
scheduling queues, and various other scheduling parameters.
 Accounting information: This includes the amount of CPU and real time used, time limits, job
or process numbers, etc.
 Memory-management information: This information includes the value of the base and
limit registers, the page, or segment tables. This depends on the memory system, which
is used by the operating system.
 I/O status information: This block includes a list of open files, the list of I/O devices that
are allocated to the process, etc.
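Conceptually, a PCB is just a record holding the fields listed above. The C sketch below is a simplified illustration; the field names, types and sizes are assumptions, not those of any real kernel.

/* Simplified sketch of a process control block (PCB). */
#include <stdint.h>
#include <stdio.h>

enum pcb_state { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED };

struct pcb {
    int            pid;              /* process identifier                   */
    enum pcb_state state;            /* process state                        */
    uint64_t       program_counter;  /* address of the next instruction      */
    uint64_t       registers[16];    /* saved CPU registers                  */
    int            priority;         /* CPU-scheduling information           */
    uint64_t       base, limit;      /* memory-management information        */
    uint64_t       cpu_time_used;    /* accounting information               */
    int            open_files[16];   /* I/O status information (open files)  */
    struct pcb    *next;             /* link used by ready/device queues     */
};

int main(void)
{
    struct pcb p = { .pid = 1, .state = P_NEW, .priority = 5 };
    printf("created PCB for pid %d (state %d)\n", p.pid, p.state);
    return 0;
}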
2.2 Process Scheduling
Process scheduling is an important part of multiprogramming operating systems. It is
the activity of removing the running process from the processor (CPU) and selecting another
process for execution. It moves a process between the different states such as ready, waiting, and
running.
2.2.1 Scheduling Queue:
Job Queue:
Every process that is created in the system is put into the job queue, i.e., the job
queue contains the PCBs of all the processes in the system. As a process enters the system, it is
put into the job queue.
Ready Queue:
The PCBs of the processes in main memory that are ready and waiting for allocation of the CPU
are stored in the ready queue.
Device Queue: The PCBs of processes waiting for a particular I/O device are stored in that
device's queue. Every device has its own device queue.
Fig. Ready queue represented as a linked list of PCBs: the queue header holds head and tail pointers to the PCBs of processes P1 ... Pn, and each PCB stores the process state, CPU registers, and other fields.
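The figure above is essentially a linked list of PCBs. The self-contained sketch below illustrates such a queue with head and tail pointers; it is only an illustration, with a stripped-down PCB and made-up names.

#include <stdio.h>
#include <stdlib.h>

struct pcb { int pid; struct pcb *next; };            /* simplified PCB       */
struct queue { struct pcb *head, *tail; };            /* ready/device queue   */

void enqueue(struct queue *q, struct pcb *p)          /* add a PCB at the tail */
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

struct pcb *dequeue(struct queue *q)                  /* remove PCB from head */
{
    struct pcb *p = q->head;
    if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
    return p;
}

int main(void)
{
    struct queue ready = { NULL, NULL };
    for (int pid = 1; pid <= 3; pid++) {
        struct pcb *p = malloc(sizeof *p);
        p->pid = pid;
        enqueue(&ready, p);                           /* P1..P3 become ready  */
    }
    struct pcb *next = dequeue(&ready);               /* scheduler picks P1   */
    printf("dispatching process %d\n", next->pid);
    free(next);
    while ((next = dequeue(&ready)) != NULL)          /* clean up the rest    */
        free(next);
    return 0;
}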
A new process is initially put in the ready queue. It waits in the ready queue until it is selected for
execution (or dispatched). Once the process is assigned to the CPU and is executing, one of the
following several events can occur:
 The process could issue an I/O request, and then be placed in the I/O queue.
 The process could create a new subprocess and wait for its termination.
 The process could be removed forcibly from the CPU, as a result of an interrupt, and be
put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state,
and is then put back in the ready queue. A process continues this cycle until it terminates, at
which time it is removed from all queues and has its PCB and resources deallocated
Two State Process Model
Two-state process models are:
 Running
 Not Running
Running
In the operating system, whenever a new process is created, it enters the system; when it is being
executed, it is in the running state.
Not Running
The processes that are not running are kept in a queue, waiting for their turn to execute.
Each entry in the queue is a pointer to a specific process.
2.2.2 Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their
main task is to select the jobs to be submitted into the system and to decide which process to
run. Schedulers are of three types −
 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler
Long Term Scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the queue and loads them
into memory for execution. Process loads into the memory for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O
bound and processor bound. It also controls the degree of multiprogramming. If the degree of
multiprogramming is stable, then the average rate of process creation must be equal to the
average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing
operating systems often have no long-term scheduler. The long-term scheduler comes into play when
a process changes state from new to ready.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in
accordance with the chosen set of criteria. It carries out the change of a process from the ready
state to the running state. The CPU scheduler selects a process from among the processes that are
ready to execute and allocates the CPU to it.
The short-term scheduler decides which process to execute next; the dispatcher then gives that
process control of the CPU. Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler
Medium-term scheduling is a part of swapping. It removes the processes from the memory. It
reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling
the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process is moved to the
secondary storage. This process is called swapping, and the process is said to be swapped out
or rolled out. Swapping may be necessary to improve the process mix.
Comparison among Schedulers
S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
1 | It is a job scheduler. | It is a CPU scheduler. | It is a process swapping scheduler.
2 | Speed is lesser than the short-term scheduler. | Speed is fastest among the three. | Speed is in between the short-term and long-term schedulers.
3 | It controls the degree of multiprogramming. | It provides lesser control over the degree of multiprogramming. | It reduces the degree of multiprogramming.
4 | It is almost absent or minimal in time-sharing systems. | It is also minimal in time-sharing systems. | It is a part of time-sharing systems.
5 | It selects processes from the pool and loads them into memory for execution. | It selects those processes which are ready to execute. | It can re-introduce the process into memory so that its execution can be continued.
2.2.3 Context Switch
A context switch is the mechanism to store and restore the state or context of a CPU in
Process Control block so that a process execution can be resumed from the same point at a
later time. Using this technique, a context switcher enables multiple processes to share a
single CPU. Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to execute another, the
state from the current running process is stored into the process control block. After this, the
state for the process to run next is loaded from its own PCB and used to set the PC, registers,
etc. At that point, the second process can start executing.
Context switches are computationally intensive since register and memory state must be saved
and restored. To reduce context-switching time, some hardware systems employ
two or more sets of processor registers. When the process is switched, the following
information is stored for later use.
 Program Counter
 Scheduling information
 Base and limit register value
 Currently used register
 Changed State
 I/O State information
 Accounting information
Context Switching Steps
The steps involved in context switching are as follows −
 Save the context of the process that is currently running on the CPU. Update the process
control block and other important fields.
 Move the process control block of the above process into the relevant queue such as
the ready queue, I/O queue etc.
 Select a new process for execution.
 Update the process control block of the selected process. This includes updating the
process state to running.
 Update the memory management data structures as required.
 Restore the context of the process that was previously running when it is loaded again
on the processor. This is done by loading the previous values of the process control
block and registers.
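A real context switch is performed inside the kernel, with the register save and restore written in architecture-specific assembly. The sketch below only models the bookkeeping steps listed above: the "CPU" is just a struct that is copied to and from each PCB, and every name is illustrative.

#include <stdio.h>

struct context { unsigned long pc; unsigned long regs[4]; };
struct task    { int pid; const char *state; struct context ctx; };

static struct context cpu;              /* stand-in for the real CPU state   */

static void context_switch(struct task *old, struct task *nxt)
{
    old->ctx = cpu;                     /* 1. save the context of the running task */
    old->state = "ready";               /* 2. update its PCB / move it to a queue  */

    nxt->state = "running";             /* 3-4. the selected task is marked running */
    /* 5. memory-management structures would be updated here (omitted)             */
    cpu = nxt->ctx;                     /* 6. restore the new task's saved context  */
}

int main(void)
{
    struct task a = { 1, "running", { 100, {0} } };
    struct task b = { 2, "ready",   { 200, {0} } };

    cpu = a.ctx;                        /* task A is currently on the CPU    */
    context_switch(&a, &b);             /* switch from A to B                */

    printf("CPU now at pc=%lu, task %d is %s, task %d is %s\n",
           cpu.pc, a.pid, a.state, b.pid, b.state);
    return 0;
}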
2.3 Operations on Processes:
There are many operations that can be performed on processes. Some of these are process
creation, process preemption, process blocking, and process termination. These are given in
detail as follows −
2.3.1 Process Creation
Processes need to be created in the system for different operations. This can be done by the
following events −
 User request for process creation
 System initialization
 Execution of a process creation system call by a running process
 Batch job initialization
A process may be created by another process using fork(). The creating process is called the
parent process and the created process is the child process. A child process can have only one
parent but a parent process may have many children. Both the parent and child processes have
the same memory image, open files, and environment strings. However, they have distinct
address spaces.
A diagram that demonstrates process creation using fork() is as follows −
Let us take the following example:
#include <stdio.h>
#include <unistd.h>

int main()
{
    printf("Before Forking\n");
    fork();
    printf("After Forking\n");
    return 0;
}
If the call to fork() is executed successfully, Linux will
• Make two identical copies of address spaces, one for the parent and the other for the child.
• Both processes will start their execution at the next statement following the fork() call.
Output of above program:
Before Forking
After Forking
After Forking
Here, the printf() statement after the fork() system call is executed by the parent as well as the child process. Both
processes start their execution right after the system call fork(). Since both processes have
identical but separate address spaces, those variables initialized before the fork() call have the
same values in both address spaces. Since every process has its own address space, any
modifications will be independent of the others. In other words, if the parent changes the value
of its variable, the modification will only affect the variable in the parent process's address
space. Other address spaces created by fork() calls will not be affected even though they have
identical variable names.
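The return value of fork() distinguishes the two processes: it is 0 in the child and the child's PID in the parent, and it can be combined with wait() so the parent waits for the child to terminate. The sketch below also demonstrates that the two copies of a variable are independent; the variable name and values are made up for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int x = 10;                       /* copied into both address spaces     */
    pid_t pid = fork();

    if (pid < 0) {                    /* fork() failed                       */
        perror("fork");
        exit(1);
    } else if (pid == 0) {            /* child: fork() returned 0            */
        x = x + 5;
        printf("child : x = %d\n", x);       /* prints 15                    */
        exit(0);
    } else {                          /* parent: fork() returned child's pid */
        wait(NULL);                   /* wait for the child to terminate     */
        printf("parent: x = %d\n", x);       /* still 10: separate copies    */
    }
    return 0;
}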
2.3.2 Process Preemption
An interrupt mechanism is used in preemption that suspends the process executing currently
and the next process to execute is determined by the short-term scheduler. Preemption makes
sure that all processes get some CPU time for execution.
A diagram that demonstrates process preemption is as follows −
2.3.3 Process Blocking
The process is blocked if it is waiting for some event to occur. This event may be I/O as the I/O
events are executed in the main memory and don't require the processor. After the event is
complete, the process again goes to the ready state.
A diagram that demonstrates process blocking is as follows −
2.3.4 Process Termination
After the process has completed the execution of its last instruction, it is terminated. The
operating system terminates the process using the exit() system call. When a process terminates, it
may return data (an exit status) to its parent process. Resources such as memory, open files and I/O
devices held by the process are de-allocated (released) by the operating system after it is
terminated. A child process may also be terminated if its parent process requests its termination.
A process usually terminates for one of the following reasons:
1. Normal exit: e.g., when a compiler has finished compiling the program and its task has been
completed.
2. Error exit: e.g., the user tries to execute a file that does not exist.
3. Fatal error (error caused by a bug): e.g., referencing nonexistent memory or dividing by zero.
4. Killed by another process: one process may kill another, e.g., a parent process
may kill a child process (using the kill() system call).
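A small sketch of normal termination, assuming a parent/child pair: the child terminates with exit() and returns a status code, which the parent collects with waitpid(). The status value 42 is arbitrary.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* child: normal exit, returning a status code to its parent */
        exit(42);
    } else {
        int status;
        waitpid(pid, &status, 0);            /* parent collects the status   */
        if (WIFEXITED(status))
            printf("child %d exited with status %d\n",
                   (int)pid, WEXITSTATUS(status));
        /* a parent could instead terminate a child with kill(pid, SIGKILL) */
    }
    return 0;
}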
2.4 Thread Scheduling
2.4.1 What is Thread?
A thread is a flow of execution through the process code, with its own program counter that
keeps track of which instruction to execute next, system registers which hold its current
working variables, and a stack which contains the execution history.
A thread shares with its peer threads information such as the code segment, the data segment and
open files. When one thread alters a shared memory item, all other threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve application
performance through parallelism.
Each thread belongs to exactly one process and no thread can exist outside a process. Each
thread represents a separate flow of control. Threads have been successfully used in
implementing network servers and web servers. They also provide a suitable foundation for
parallel execution of applications on shared memory multiprocessors. The following figure
shows the working of a single-threaded and a multithreaded process.
Difference between Process and Thread
S.N. | Process | Thread
1 | Process is heavy weight or resource intensive. | Thread is light weight, taking lesser resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.
6 | In multiple processes each process operates independently of the others. | One thread can read, write or change another thread's data.
2.4.2 Benefits of Threads
o Enhanced throughput of the system: When the process is split into many threads, and
each thread is treated as a job, the number of jobs done in the unit time increases. That
is why the throughput of the system also increases.
o Effective Utilization of Multiprocessor system: When you have more than one thread in
one process, you can schedule more than one thread in more than one processor.
o Faster context switch: The context switching period between threads is less than the
process context switching. The process context switch means more overhead for the
CPU.
o Responsiveness: When the process is split into several threads, the application can continue to
respond to the user even while one thread is blocked or busy with a lengthy operation.
o Communication: Communication between multiple threads is simple because the threads share
the same address space, whereas processes must use separate inter-process communication
mechanisms to communicate with each other.
o Resource sharing: Resources can be shared between all threads within a process, such
as code, data, and files. Note: The stack and register cannot be shared between threads.
There is a stack and register for each thread.
2.4.3 Types of Thread
Threads are implemented in the following two ways −
 User Level Threads − threads managed by the user (application).
 Kernel Level Threads − threads managed by the operating system, acting on the kernel, the
operating system core.
User Level Threads
The threads implemented at the user level are known as user threads. In user level thread,
thread management is done by the application. In this case, kernel is not aware of the
existence of threads. The thread library contains code for creating and destroying threads, for
passing messages and data between threads, for scheduling thread execution and for saving
and restoring thread contexts.
As kernel is unaware of user level threads, all thread creation, scheduling etc. are done at user
space without the need of kernel intervention. Therefore user level threads are generally fast
to create and manage.
User thread libraries include POSIX Pthreads, Mach C-threads and Solaris UI threads. The
application starts with a single thread.
Advantages of User-level threads
1. User threads can be implemented more easily than kernel threads.
2. User-level threads can be applied to such types of operating systems that do not
support threads at the kernel-level.
3. They are faster and more efficient.
4. Context switch time is shorter than the kernel-level threads.
5. It does not require modifications of the operating system.
6. The representation of user-level threads is very simple. The register, PC, stack, and mini thread
control blocks are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the
kernel.
Disadvantages of User-level threads
1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management code
in the application area. Kernel threads are supported directly by the operating system. Any
application can be programmed to be multithreaded. All of the threads within an application
are supported within a single process.
The Kernel maintains context information for the process as a whole and for individual
threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel
performs thread creation, scheduling and management in Kernel space. Kernel threads are
generally slower to create and manage than the user threads.
As the Kernel is managing the threads, if a thread performs a blocking system call, the kernel
can schedule another thread in the application for execution.
In a multiprocessor environment, the kernel can schedule threads on different processors.
Most contemporary operating systems, e.g. Windows NT, Windows 2000, Solaris 2, BeOS and Tru64 UNIX,
support kernel threads.
Advantages of Kernel-level threads
1. The kernel is fully aware of all threads.
2. The scheduler may decide to give more CPU time to a process that has a large number of
threads.
3. Kernel-level threads are good for applications that frequently block.
Disadvantages of Kernel-level threads
1. The kernel thread manages and schedules all threads.
2. The implementation of kernel threads is more difficult than that of user threads.
3. Kernel-level threads are slower to create and manage than user-level threads.
2.4.4 Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread facility.
Solaris is a good example of this combined approach. In a combined system, multiple threads
within the same application can run in parallel on multiple processors and a blocking system
call need not block the entire process. Multithreading models are of three types:
 One to one relationship
 Many to one relationship.
 Many to many relationship.
One to one Model:
In the one-to-one thread model, there is a one-to-one relationship between a user-level thread and a kernel-
level thread: the model maps each user thread to a separate kernel thread.
The following fig shows one-to-one model.
e.g., OS/2, Windows NT and Windows 2000 use the one-to-one model.
Advantages:
This model provides more concurrency than the many-to-one model.
It allows multiple threads to execute in parallel on multiprocessors.
Disadvantages:
For each user thread, a corresponding kernel thread is required.
Creating a kernel thread adds overhead.
This can reduce the performance of an application.
Many-to-one Model:
The many to one model maps many user level threads to one kernel thread. Thread
management is done in user space, so it is efficient, but the entire process will block if a thread
makes a blocking system call. Only one thread can access the kernel at a time. Multiple threads
are unable to run in parallel on multiprocessors. Green threads, a thread library available for
Solaris 2, uses the many-to-one thread model.
Advantages:
One kernel thread controls multiple user threads.
It is efficient because thread management is done by thread library.
Used in language
Many-to-many Model:
In this type of model, there are several user-level threads and several kernel-level threads. The
number of kernel threads created depends upon the particular application. The developer can
create many threads at both levels, but the numbers need not be the same. The many-to-many model is a
compromise between the other two models. In this model, if any thread makes a blocking
system call, the kernel can schedule another thread for execution. Also, with the introduction of
multiple threads, complexity is not present as in the previous models. Though this model allows
the creation of multiple kernel threads, true concurrency cannot be achieved by this model.
This is because the kernel can schedule only one process at a time.
2.4.5 Thread Libraries
A thread library provides the programmer an API for creating and managing threads. There are
two primary ways of implementing a thread library.
 The first approach is to provide a library entirely in user space with no kernel
support. All code and data structures for the library exist in user space. This
means that invoking a function in the library results in a local function call in user
space and not a system call.
 The second approach is to implement a kernel-level library supported directly by
the OS. In this case, code and data structures for the library exist in kernel space.
Invoking a function in the API for the library typically results in a system call to
the kernel.
Three main thread libraries are in use today:
 POSIX Pthreads: Pthreads, the threads extension of the POSIX standard, may be
provided as either a user- or kernel-level library. The Pthreads library is commonly implemented
on Linux, UNIX, Solaris and Mac OS X. A Pthreads program must always include
the pthread.h header file.
 Win32: To create a thread using the Win32 library always include windows.h header file
in the program. The Win32 thread library is a kernel-level library which means invoking
the Win32 library function results in a system call
 Java: The Java thread API allows thread creation and management directly in Java
programs. However, because in most instances the JVM is running on top of a host OS,
the Java thread API is typically implemented using a thread library available on the host
system
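As a concrete example of the first library listed above, the sketch below is a minimal Pthreads program that creates one worker thread and waits for it (compile with -pthread); the summation task is just an illustration.

#include <pthread.h>
#include <stdio.h>

/* Function executed by the new thread: sum 1..n, where n is passed in. */
static void *runner(void *param)
{
    int n = *(int *)param;
    long sum = 0;
    for (int i = 1; i <= n; i++)
        sum += i;
    printf("thread: sum of 1..%d = %ld\n", n, sum);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int n = 10;

    pthread_create(&tid, NULL, runner, &n);  /* create the worker thread */
    pthread_join(tid, NULL);                 /* wait for it to finish    */
    printf("main: worker thread has terminated\n");
    return 0;
}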
Process vs Thread:
Process | Thread
1. Processes cannot share memory. | 1. Threads can share memory and files.
2. Process execution is very slow. | 2. Thread execution is very fast.
3. It takes more time to create a process. | 3. It takes less time to create a thread.
4. It takes more time to complete the execution and terminate. | 4. It takes less time to complete the execution and terminate.
5. Processes are loosely coupled. | 5. Threads are tightly coupled.
6. Processes are not suitable for parallel activities. | 6. Threads are suitable for parallel activities.
7. System calls are required for processes to communicate with each other. | 7. System calls are not required.
8. Implementing communication between processes is difficult. | 8. Communication between two threads is very easy.
9. Process is heavy weight or resource intensive. | 9. Thread is light weight, taking lesser resources than a process.
10. Process switching needs interaction with the operating system. | 10. Thread switching does not need to interact with the operating system.
11. In multiple processing environments, each process executes the same code but has its own memory and file resources. | 11. All threads can share the same set of open files and child processes.
12. If one process is blocked, then no other process can execute until the first process is unblocked. | 12. While one thread is blocked and waiting, a second thread in the same task can run.
13. Multiple processes without using threads use more resources. | 13. Multiple threaded processes use fewer resources.
14. In multiple processes each process operates independently of the others. | 14. One thread can read, write or change another thread's data.
Chapter 3 Process Scheduling
3.1 Basic Concept – CPU-I/O burst cycle, Scheduling Criteria ,CPU scheduler,
Preemptive scheduling, Dispatcher
3.2 Scheduling Algorithms – FCFS, SJF, Priority scheduling, Round-robin
scheduling, Multiple queue scheduling, Multilevel feedback queue scheduling
3.1 Basic Concept
CPU Scheduling
CPU scheduling is the basis of multiprogramming operating systems. The objective of
multiprogramming is to have some process running at all times, in order to maximize CPU
utilization. Scheduling is a fundamental operating-system function. Almost all computer
resources are scheduled before use.
3.1.1 CPU-I/O Burst Cycle
Process execution consists of a cycle of CPU execution and I/O wait. Processes alternate
between these two states. Process execution begins with a CPU burst. That is followed by
an I/O burst, then another CPU burst, then another I/O burst, and so on. Eventually, the last
CPU burst will end with a system request to terminate execution, rather than with another
I/O burst.
A CPU burst mostly consists of performing calculations, while an I/O burst consists of waiting for data
transfer into or out of the system.
Fig. Alternate sequence of CPU and I/O bursts
3.1.2 Scheduling Criteria
Many criteria have been suggested for comparing CPU-scheduling algorithms. The criteria
include the following:
 CPU utilization: We want to keep the CPU as busy as possible. CPU utilization may range
from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded
system) to 90 percent (for a heavily used system).
 Throughput: If the CPU is busy executing processes, then work is being done. One
measure of work is the number of processes completed per time unit, called throughput.
For long processes, this rate may be 1 process per hour; for short transactions, throughput
might be 10 processes per second.
 Turnaround time: The interval from the time of submission of a process to the time of
completion is the turnaround time. Turnaround time is the sum of the periods spent waiting
to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
 Waiting time: Waiting time is the sum of the periods spent waiting in the ready queue.
 Response time: In an interactive system, turnaround time may not be the best criterion.
Another measure is the time from the submission of a request until the first response is
produced. This measure, called response time, is the amount of time it takes to start
responding, but not the time that it takes to output that response.
It is desirable to maximize CPU utilization and throughput and to minimize turnaround
time, waiting time, and response time.
Optimization Criteria
■ Max CPU utilization
■ Max throughput
■ Min turnaround time
■ Min waiting time
■ Min response time
3.1.3 CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes in
the ready queue to be executed. The selection process is carried out by the short-term
scheduler (or CPU scheduler).
The ready queue is not necessarily a first-in, first-out (FIFO) queue. It may be a FIFO queue,
a priority queue, a tree, or simply an unordered linked list.
Preemptive Scheduling
CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state
3. When a process switches from the waiting state to the ready state
4. When a process terminates
Under circumstances 1 and 4, the scheduling scheme is non-preemptive; otherwise, it is
preemptive. Under non-preemptive scheduling, once the CPU has been allocated to a process,
the process keeps the CPU until it releases the CPU either by terminating or by switching to
the waiting state. This scheduling method was used by early Microsoft Windows environments.
3.1.4 Dispatcher
Dispatcher module gives control of the CPU to the process selected by the short-term
scheduler; this involves:
✦switching context
✦switching to user mode
✦jumping to the proper location in the user program to restart that program
■ Dispatch latency – time it takes for the dispatcher to stop one process and start another
running.
3.2 Scheduling Algorithms
Scheduling algorithms are used to decide which of the processes in the ready queue is to be
allocated CPU time. In simple terms, scheduling
algorithms are used to schedule processes on the CPU. The main
scheduling algorithms are:
 First Come First Served (FCFS) Scheduling
 Shortest Job First(SJF) Scheduling
 Priority Scheduling
 Round Robin Scheduling
 Multilevel Queue Scheduling
 Multilevel Feedback Queue Scheduling
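As a small preview of how these algorithms are evaluated against the criteria of Section 3.1.2, the sketch below computes waiting and turnaround times under First Come First Served scheduling, assuming three example processes that all arrive at time 0; the burst times (24, 3, 3 ms) are made up for illustration.

#include <stdio.h>

int main(void)
{
    /* Example CPU burst times (ms) for P1..P3, all arriving at time 0. */
    int burst[] = { 24, 3, 3 };
    int n = 3;
    int waiting = 0, turnaround = 0, total_wait = 0, total_turn = 0;

    /* Under FCFS each process waits for all the processes ahead of it. */
    for (int i = 0; i < n; i++) {
        turnaround = waiting + burst[i];     /* completion time (arrival = 0) */
        printf("P%d: waiting = %2d, turnaround = %2d\n",
               i + 1, waiting, turnaround);
        total_wait += waiting;
        total_turn += turnaround;
        waiting += burst[i];                 /* next process starts here      */
    }
    printf("average waiting = %.2f, average turnaround = %.2f\n",
           (double)total_wait / n, (double)total_turn / n);
    return 0;
}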
Chapter 4 Introduction to Distributed OS and Architecture
Distributed System definition:
General structure of Distributed System
From the point of view of a specific processor in a distributed system, the rest of the processors and
their respective resources are remote, whereas its own resources are local.
The processors in a distributed system may vary in function and size. They may include small
microprocessors, workstations, minicomputers and large general purpose computer systems.
These processors are referred to by a number of names, such as sites, nodes, computers, machines
and hosts, depending on the context in which they are mentioned.
The word site is used to indicate the location of a machine, and host to refer to a specific system at a site.
Generally, one host at one site, the server, has a resource that another host at another site, the
client (or user) would like to use.
Any implementation of a distributed computing model (an abstract view of a system) must involve
the implementation of processes, message links, routing schemes and timings.
The main purpose of a distributed system is to enable users to access long-distance resources
and to share resources such as text, pictures, voice, video and so on with other users in a
controlled way.
A distributed system, in its simplest definition, is a group of computers working together so as to
appear as a single computer to the end-user.
A distributed computing system is a collection of processors interconnected by a communication
network in which each processor has its own local memory and other peripherals, and the
communication between any two processors of the system takes place by message passing over
the communication network. For a particular processor, its own resources are local, whereas the
other processors and their resources are remote
A distributed operating system is one that looks to its users like an ordinary centralized operating
system but runs on multiple, independent central processing units (CPUs). The key concept here
is transparency. In other words, the use of multiple processors should be invisible (transparent)
to the user. Another way of expressing the same idea is to say that the user views the system as a
"virtual uniprocessor," not as a collection of distinct machines
Advantages of Distributed Systems
Data/Resource sharing
The distributed system enables a component to share data easily with other components of the
system. This is possible due to the fact that in a distributed system, nodes are interconnected for
collaboration purposes.
Scalability
Scalability means that we can change the size and extent of a particular system. Distributed Systems
provide unmatched scalability as we can easily add more nodes in a particular network.
Failure handling
A distributed system doesn’t depend on a single node. So, even if there is a single node
malfunctioning, other nodes continue to function properly. Thus, the system is intact.
Reliability
For a system to be reliable, it should handle errors efficiently. As distributed systems easily handle
system crashes, they are quite reliable.
Efficiency
Distributed systems are highly efficient as they involve multiple computers that save time for users.
Also, they can provide higher performance as compared to centralized systems.
Lesser delay
In today’s world, time is an important constraint. Distributed Systems provide a low latency rate. For
example, consider a user who uses the internet and loads a website. The system makes sure that the
node located closer to the user is used to perform the loading task in order to save time.
Disadvantages of Distributed Systems
Security issue
Security issues occur in many software and hardware systems, and the same is true of distributed
systems. Such security risks arise because the many nodes and connections in an open
system setting make it difficult to ensure adequate security.
High set-up cost
The initial cost of installation and set-up is high due to many hardware and software devices. There
are other maintenance costs associated with the system which adds to the total cost, making it even
more expensive.
Data loss
There can be instances when the data sent from one node to another node can be lost midway in its
journey from the source node to the destination node.
Difficult to handle
The hardware and software of a distributed system are quite complex. It’s complicated to maintain
and operate the hardware components. Also, software complexity makes it necessary to pay special
attention to the software components.
Overloading issue
The Overloading issue can occur in the system if all the nodes of the distributed system try to send
data at one particular instant of time.
Design goals of Distributed system:
1.Making Resources Accessible:-
The main goal of a distributed system is to make it easy for the users (and applications) to access
remote resources, and to share them in a controlled and efficient way.
Resources - typical examples include things like printers, computers, storage facilities, data, files,
Web pages, and networks etc.
There are many reasons to the share resources.
One obvious reason is that of economics.
For example, it is cheaper to let a printer be shared by several users in a small office than having to
buy and maintain a separate printer for each user. Likewise, it makes economic sense to share costly
resources such as supercomputers, high-performance storage systems, image setters, and other
expensive peripherals.
2. Transparency :
One of the main goals of a distributed operating system is to make the existence of multiple
computers invisible (transparent) and provide a single system image to its users. That is, a
distributed operating system must be designed in such a way that a collection of distinct machines
connected by a communication subsystem appears to its users as a virtual uniprocessor.
There are seven forms of transparency of distributed operating system –
2.1 Access Transparency:
Access transparency means that users should not need to know, or be able to recognize, whether a resource
(hardware or software) is remote or local. This implies that the distributed operating system should
allow users to access remote resources in the same way as local resources. That is, the user
interface, which takes the form of a set of system calls, should not distinguish between local and
remote resources, and it should be the responsibility of the distributed operating system to locate
the resources and to arrange for servicing user requests in a user-transparent manner
2.2 Location Transparency:
The two main aspects of location transparency are as follows:
1. Name transparency: This refers to the fact that the name of a resource (hardware or
software) should not reveal any hint as to the physical location of the resource. That is, the
name of a resource should be independent of the physical connectivity or topology of the
system or the current location of the resource. Furthermore, such resources, which are
capable of being moved from one node to another in a distributed system (such as a file),
must be allowed to move without having their names changed. Therefore, resource names
must be unique system wide.
2. User mobility: This refers to the fact that no matter which machine a user is logged onto,
he or she should be able to access a resource with the same name. That is, the user should
not be required to use different names to access the same resource from two different
nodes of the system. In a distributed system that supports user mobility, users can freely log
on to any machine in the system and access any resource without making any extra effort.
2.3 Replication Transparency:
1. For better performance and reliability, almost all distributed operating systems have the
provision to create replicas (additional copies) of files and other resources on different
nodes of the distributed system. In these systems, both the existence of multiple copies of a
replicated resource and the replication activity should be transparent to the users.
2. That is, two important issues related to replication transparency are naming of replicas
and replication control.
3. It is the responsibility of the system to name the various copies of a resource and to map a
user-supplied name of the resource to an appropriate replica of the resource.
4. Furthermore, replication control decisions, such as how many copies of the resource should
be created, where each copy should be placed, and when a copy should be created or deleted,
should be made entirely automatically by the system in a user-transparent manner.
2.4 Failure Transparency:
1. Failure transparency deals with masking partial failures in the system from the users, such
as a communication link failure, a machine failure, or a storage device crash.
 
Isotopic evidence of long-lived volcanism on Io
Isotopic evidence of long-lived volcanism on IoIsotopic evidence of long-lived volcanism on Io
Isotopic evidence of long-lived volcanism on Io
 
Recombination DNA Technology (Nucleic Acid Hybridization )
Recombination DNA Technology (Nucleic Acid Hybridization )Recombination DNA Technology (Nucleic Acid Hybridization )
Recombination DNA Technology (Nucleic Acid Hybridization )
 

operating system notes by madhavi.pdf

The operating system simply provides an environment within which other programs can do useful work. The role of the operating system can be described from two perspectives: the user view and the system view.

1.2.1 User View
The user view depends on the interface through which the user works with the system. The different user-view experiences can be explained as follows:
• If the user is using a personal computer, the operating system is largely designed to make interaction easy. Some attention is also paid to performance, but there is little need for the operating system to worry about resource utilization, because a personal computer dedicates all of its resources to a single user and there is no sharing.
• If the user is using a terminal connected to a mainframe or a minicomputer, the operating system is largely concerned with resource utilization. There may be many terminals connected to the mainframe, and the operating system must make sure that resources such as CPU time, memory and I/O devices are divided fairly among them.
• If the user is sitting at a workstation connected to other workstations through a network, the operating system needs to focus on both individual use of resources and sharing across the network. The workstation uses its own resources exclusively, but it also needs to share files and other data with other workstations.
• If the user is using a handheld computer such as a mobile phone, the operating system is designed mainly for usability, including a few remote operations, and must also take the battery level of the device into account.
Some devices have little or no user view because there is no direct interaction with users; examples are embedded computers in home appliances and automobiles.

1.2.2 System View
From the computer system's point of view, the operating system is the bridge between applications and the hardware. It is the program most intimately involved with the hardware and is used to control it as required. The system view of the operating system can be explained as follows:
• The system views the operating system as a resource allocator. Many resources, such as CPU time, memory space, file storage space and I/O devices, are required by processes for execution. It is the duty of the operating system to allocate these resources judiciously so that the computer system runs as smoothly as possible.
• The operating system also works as a control program. It manages all the processes and I/O devices so that the computer system works smoothly and
there are no errors. It makes sure that the I/O devices work properly without creating problems.
• Operating systems can also be viewed as a way to make using the hardware easier. Computers are meant to solve user problems easily, but it is not easy to work directly with the computer hardware, so operating systems were developed to communicate with the hardware on the user's behalf.
• An operating system can also be considered as a program that runs at all times in the background of a computer system (known as the kernel) and manages all the application programs. This is the definition of the operating system that is most commonly followed.

1.3 Operating-System Operations
As mentioned earlier, modern operating systems are interrupt driven. If there are no processes to execute, no I/O devices to service, and no users to whom to respond, an operating system sits quietly, waiting for something to happen. Events are almost always signalled by the occurrence of an interrupt or a trap. A trap (or an exception) is a software-generated interrupt caused either by an error (for example, division by zero or an invalid memory access) or by a specific request from a user program that an operating-system service be performed. The interrupt-driven nature of an operating system defines that system's general structure. For each type of interrupt, separate segments of code in the operating system determine what action should be taken. An interrupt service routine is provided that is responsible for dealing with the interrupt.

1.3.1 Dual-Mode Operation
In order to ensure the proper execution of the operating system, we must be able to distinguish between the execution of operating-system code and user-defined code. The approach taken by most computer systems is to provide hardware support that allows us to differentiate among various modes of execution. At the very least, we need two separate modes of operation: user mode and kernel mode (also called supervisor mode, system mode, or privileged mode). A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: kernel (0) or user (1). When the system is executing user instructions, it is in user mode. When the user program issues a system call to request an operating-system service (OS code), the system switches from user mode to kernel mode. With the mode bit, we are able to distinguish between a task that is executed on behalf of the operating system and one that is executed on behalf of the user.
• When the computer system is executing on behalf of a user application, the system is in user mode.
• However, when a user application requests a service from the operating system (via a system call), the system must transition from user to kernel mode to fulfill the request.
• At system boot time, the hardware starts in kernel mode.
• The operating system is then loaded and starts user applications in user mode.
• Whenever a trap or interrupt occurs, the hardware switches from user mode to kernel mode (that is, changes the state of the mode bit to 0).
• Thus, whenever the operating system gains control of the computer, it is in kernel mode.
• The system always switches to user mode (by setting the mode bit to 1) before passing control to a user program.

Fig. Transition from user mode to kernel mode

The dual mode of operation provides the means for protecting the operating system from errant users, and errant users from one another. This protection is achieved by designating some of the machine instructions as privileged instructions. The hardware allows privileged instructions to be executed only in kernel mode. If an attempt is made to execute a privileged instruction in user mode, the hardware does not execute the instruction but rather treats it as illegal and traps it to the operating system. The instruction to switch to user mode is an example of a privileged instruction; some other examples include I/O control, timer management, and interrupt management. A system call is handled like an interrupt: when a system call is executed, control passes to the appropriate service routine in the operating system and the mode bit is set to kernel mode.
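As a concrete illustration of this transition, the short C sketch below (a general POSIX example, not taken from the original notes) calls the write() system call; the library wrapper issues a trap, the hardware switches to kernel mode so the operating system can perform the I/O, and control then returns to the program in user mode.

```c
#include <unistd.h>     /* write(): POSIX system-call wrapper */
#include <string.h>

int main(void)
{
    const char msg[] = "Hello from user mode\n";

    /* The program runs in user mode (mode bit = 1). The write() wrapper
     * issues a trap, the hardware sets the mode bit to 0 (kernel mode),
     * and the kernel's system-call handler performs the I/O. */
    write(STDOUT_FILENO, msg, strlen(msg));

    /* When write() returns, the mode bit is set back to 1 and execution
     * continues in user mode. */
    return 0;
}
```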
1.3.2 Timer
The operating system must maintain control over the CPU. The system must prevent a user program from getting stuck in an infinite loop, or from never calling system services and never returning control to the operating system. To accomplish this goal, a timer is used. A timer can be set to interrupt the computer after a specified period. The period may be fixed (for example, 1/60 second) or variable (for example, from 1 millisecond to 1 second). A variable timer is generally implemented by a fixed-rate clock and a counter. The operating system sets the counter; every time the clock ticks, the counter is decremented, and when the counter reaches 0, an interrupt occurs. For instance, a 10-bit counter with a 1-millisecond clock allows interrupts at intervals from 1 millisecond to 1,024 milliseconds, in steps of 1 millisecond. Before turning control over to a user program, the operating system ensures that the timer is set to interrupt. If the timer interrupts, control transfers automatically to the operating system, which may treat the interrupt as a fatal error or may give the program more time. Clearly, instructions that modify the content of the timer are privileged.

Thus, we can use the timer to prevent a user program from running too long. A simple technique is to initialize a counter with the amount of time that a program is allowed to run. A program with a 7-minute time limit, for example, would have its counter initialized to 420. Every second, the timer interrupts and the counter is decremented by 1. As long as the counter is positive, control is returned to the user program. When the counter becomes negative, the operating system terminates the program for exceeding the assigned time limit.
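The kernel's timer is a hardware device and its interrupt handler runs in kernel mode, so it cannot be reproduced directly in a user program. The following C sketch only illustrates the idea at user level, using the POSIX alarm() call to request a signal after a fixed interval and terminating a runaway loop when the signal arrives; the 3-second limit is an arbitrary value chosen for the example.

```c
#include <signal.h>
#include <unistd.h>

/* Invoked when the alarm "timer" expires. Only async-signal-safe
 * calls (write, _exit) are used inside the handler. */
static void on_alarm(int sig)
{
    (void)sig;
    const char msg[] = "time limit exceeded - terminating\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);
    _exit(1);
}

int main(void)
{
    signal(SIGALRM, on_alarm);      /* install the handler            */
    alarm(3);                       /* ask for a SIGALRM in 3 seconds */

    volatile unsigned long spin = 0;
    for (;;) {
        spin++;                     /* simulate a runaway user program */
    }
}
```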
1.4 Operating system structure
For efficient performance and implementation, an OS should be partitioned into separate subsystems, each with carefully defined tasks, inputs, outputs, and performance characteristics. These subsystems can then be arranged in various architectural configurations:

Simple structure:
Such operating systems do not have a well-defined structure; they are small, simple and limited systems in which the interfaces and levels of functionality are not well separated. MS-DOS is an example of such an operating system: application programs are able to access the basic I/O routines directly, so a failing user program can cause the entire system to crash. A diagram of the structure of MS-DOS is shown below.

Advantages of Simple structure:
• It delivers better application performance because there are few interfaces between the application program and the hardware.
• It is easy for kernel developers to develop such an operating system.
Disadvantages of Simple structure:
• The structure becomes complicated because no clear boundaries exist between modules.
• It does not enforce data hiding in the operating system.

Layered structure:
An OS can be broken into pieces that retain much more control over the system. In this structure the OS is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware and the topmost layer (layer N) is the user interface. The layers are designed so that each layer uses the functions of the lower-level layers only. This simplifies debugging: once the lower-level layers have been debugged, an error found while debugging a layer must be in that layer itself. The main disadvantage of this structure is that at each layer the data needs to be modified and passed on, which adds overhead to the system. Moreover, careful planning of the layers is necessary, since a layer can use only lower-level layers. UNIX is an example of this structure.
Advantages of Layered structure:
• Layering makes it easier to enhance the operating system, since the implementation of a layer can be changed without affecting the other layers.
• It is very easy to perform debugging and system verification.
Disadvantages of Layered structure:
• Application performance is degraded as compared to the simple structure.
• It requires careful planning of the layers, since higher layers may use the functionality of only the lower layers.

Micro-kernel:
This structure designs the operating system by removing all non-essential components from the kernel and implementing them as system and user programs, resulting in a smaller kernel called the micro-kernel. The advantage of this structure is that new services are added in user space and do not require the kernel to be modified. It is therefore more secure and reliable: if a service fails, the rest of the operating system remains untouched. Mac OS is an example of this type of OS.
Advantages of Micro-kernel structure:
• It makes the operating system portable to various platforms.
• Because microkernels are small, they can be tested effectively.
Disadvantages of Micro-kernel structure:
• The increased level of inter-module communication degrades system performance.

Modular structure or approach:
This is considered the best approach for an OS. It involves designing a modular kernel: the kernel has only a set of core components, and other services are added as dynamically loadable modules either at boot time or at run time. It resembles the layered structure in that each kernel module has defined and protected interfaces, but it is more flexible than the layered structure because a module can call any other module. For example, the Solaris OS is organized as shown in the figure.
1.5 Protection and security
If a computer system has multiple users and allows the concurrent execution of multiple processes, then access to data must be regulated. For that purpose, mechanisms ensure that files, memory segments, the CPU, and other resources can be operated on only by those processes that have gained proper authorization from the operating system. For example, memory-addressing hardware ensures that a process can execute only within its own address space. The timer ensures that no process can gain control of the CPU without eventually relinquishing it. Device-control registers are not accessible to users, so the integrity of the various peripheral devices is protected.

Protection, then, is any mechanism for controlling the access of processes or users to the resources defined by a computer system. Protection can improve reliability by detecting errors at the interfaces between component subsystems. Furthermore, an unprotected resource cannot defend against use (or misuse) by an unauthorized or incompetent user. A protection-oriented system provides a means to distinguish between authorized and unauthorized usage.

A system can have adequate protection but still be prone to failure and allow inappropriate access. Consider a user whose authentication information (username and password) is stolen; her data could be copied or deleted even though file and memory protection are working. It is the job of security to defend a system from external and internal attacks. Such attacks span a huge range and include viruses and worms, denial-of-service attacks (which use up all of a system's resources and so keep legitimate users out of the system), identity theft, and theft of service (unauthorized use of a system). Prevention of some of these attacks is considered an operating-system function on some systems, while other systems leave it to policy or additional software. Due to the alarming rise in security incidents, operating-system security features represent a fast-growing area of research and implementation.

Most operating systems maintain a list of user names and associated user identifiers (user IDs); in Windows, this is a security ID (SID). These numerical IDs are unique, one per user. When a user logs into the system, the authentication stage determines the appropriate user ID for the user, and that user ID is then associated with all of the user's processes and threads. In some circumstances, the owner of a file on a UNIX system may be allowed to perform all operations on that file, whereas a selected set of users may be allowed only to read the file. To accomplish this, we need to define a group name and the set of users belonging to that group. Group functionality can be implemented as a system-wide list of group names and group identifiers. A user can be in one or more groups, depending on operating-system design decisions.

Sometimes a user needs access to a resource that is normally restricted, so operating systems provide various methods to allow privilege escalation. On UNIX, for instance, the setuid attribute on a program causes that program to run with the user ID of the owner of the file rather than the current user's ID. The process runs with this effective UID until it turns off the extra privileges or terminates.
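The distinction between the real and the effective user ID can be observed from a program. The short C sketch below (a general POSIX illustration, not part of the original notes) prints both IDs; for an ordinary executable they are equal, while for a setuid executable owned by another user they differ.

```c
#include <stdio.h>
#include <unistd.h>     /* getuid(), geteuid() */

int main(void)
{
    /* Real UID: the user who started the process.
     * Effective UID: the identity used for permission checks;
     * for a setuid program this is the owner of the executable. */
    printf("real UID:      %ld\n", (long)getuid());
    printf("effective UID: %ld\n", (long)geteuid());
    return 0;
}
```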
1.6 Computing Environments
Computer systems are used in a number of different computing environments. Some commonly used environments are described below.

1.6.1 Traditional Computing
Just a few years ago, the typical computing environment consisted of PCs connected to a network, with servers providing file and print services. Remote access was awkward, and portability was achieved by the use of laptop computers. Terminals attached to mainframes were widespread at many companies. The current trend is toward providing more ways to access these computing environments. Web technologies and increasing WAN bandwidth are stretching the boundaries of traditional computing. Companies establish portals, which provide Web accessibility to their internal servers. Network computers (or thin clients), which are essentially terminals that understand web-based computing, are used in place of traditional workstations where more security or easier maintenance is desired. Mobile computers can synchronize with PCs to allow very portable use of company information; they can also connect to wireless and cellular data networks to use the company's Web portal (as well as the myriad other Web resources).

At home, most users once had a single computer with a slow modem connection to the office, the Internet, or both. Today, network-connection speeds once available only at great cost are relatively inexpensive in many places, giving home users more access to more data. These fast data connections allow home computers to serve up Web pages and to run networks that include printers, client PCs, and servers. Many homes use firewalls to protect their networks from security breaches. To optimize the use of computing resources, multiple users shared time on these systems: time-sharing systems used a timer and scheduling algorithms to cycle processes rapidly through the CPU, giving each user a share of the resources.

1.6.2 Client-Server Computing
In this type of system a centralized architecture is used. Terminals (clients) are connected to centralized, high-speed, high-capacity server systems, and the servers satisfy the requests generated by the client systems. This form of specialized distributed system is called a client–server system. The following figure shows the general structure of a client–server system.

Server systems can be broadly categorized as compute servers and file servers:
• The compute-server system provides an interface to which a client can send a request to perform an action (for example, read data). In response, the server executes the action and sends the results back to the client. A server running a database that responds to client requests for data is an example of such a system; a minimal client-side sketch is given after this list.
• The file-server system provides a file-system interface where clients can create, update, read, and delete files. An example of such a system is a web server that delivers files to clients running web browsers.
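The request/response pattern of a compute server can be sketched with a short TCP client in C. This is an illustrative example only: the server address 127.0.0.1, the port 9000, and the text of the request are hypothetical and not part of the original notes.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>     /* struct sockaddr_in    */
#include <arpa/inet.h>      /* inet_pton(), htons()  */
#include <sys/socket.h>     /* socket(), connect()   */

int main(void)
{
    /* Create a TCP socket (the client side of the connection). */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Hypothetical compute server listening on 127.0.0.1:9000. */
    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port   = htons(9000);
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    if (connect(fd, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("connect");
        return 1;
    }

    /* Send a request and wait for the server's response. */
    const char request[] = "GET record 42\n";
    write(fd, request, sizeof(request) - 1);

    char reply[256];
    ssize_t n = read(fd, reply, sizeof(reply) - 1);
    if (n > 0) {
        reply[n] = '\0';
        printf("server replied: %s", reply);
    }

    close(fd);
    return 0;
}
```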
1.6.3 Peer-to-Peer Computing
Another structure for a distributed system is the peer-to-peer (P2P) system model. In this model, clients and servers are not distinguished from one another. Instead, all nodes within the system are considered peers, and each may act as either a client or a server, depending on whether it is requesting or providing a service. Peer-to-peer systems offer an advantage over traditional client-server systems: in a client-server system the server is a bottleneck, but in a peer-to-peer system services can be provided by several nodes distributed throughout the network.

To participate in a peer-to-peer system, a node must first join the network of peers. Once a node has joined the network, it can begin providing services to, and requesting services from, other nodes in the network. Determining what services are available is accomplished in one of two general ways:
• When a node joins a network, it registers its service with a centralized lookup service on the network. Any node desiring a specific service first contacts this centralized lookup service to determine which node provides the service. The remainder of the communication takes place between the client and the service provider.
• An alternative scheme uses no centralized lookup service. Instead, a peer acting as a client must discover what node provides a desired service by broadcasting a request for the service to all other nodes in the network; the node (or nodes) providing that service responds to the peer making the request. To support this approach, a discovery protocol must be provided that allows peers to discover services offered by other peers in the network.
The following figure shows the general structure of peer-to-peer computing.
1.6.4 Distributed Systems
A distributed system is a collection of physically separate, possibly heterogeneous, computer systems that are networked to provide users with access to the various resources that the system maintains. Access to a shared resource increases computation speed, functionality, data availability, and reliability. Some operating systems generalize network access as a form of file access, while others make users specifically invoke network functions. Generally, systems contain a mix of the two modes, for example FTP and NFS. The protocols that create a distributed system can greatly affect that system's utility and popularity.

A network, in the simplest terms, is a communication path between two or more systems. Distributed systems depend on networking for their functionality. Networks vary by the protocols used, the distances between nodes, and the transport media. TCP/IP is the most common network protocol, and it provides the fundamental architecture of the Internet.

Reasons for a distributed operating system / Advantages of a distributed operating system:
1. Resource sharing: If a number of different sites are connected together, a user at one site may be able to access the resources available at another site.
2. Computation speedup: A particular computation can be partitioned into sub-computations that can run concurrently.
3. Reliability: In a distributed system, if one site fails the remaining sites can continue operating, and hence reliability is improved.
4. Communication: When various sites are connected together, users can exchange information with the help of the communication network.
Fig. General structure of distributed system

A network operating system is an operating system that provides features such as file sharing across the network, along with a communication scheme that allows different processes on different computers to exchange messages. A computer running a network operating system acts autonomously from all other computers on the network, although it is aware of the network and is able to communicate with other networked computers. A distributed operating system provides a less autonomous environment: the different computers communicate closely enough to provide the illusion that only a single operating system controls the network.

Extras - Differences

1. Multiprocessor system vs. clustered system
1) A multiprocessor system is a single system with more than one processor; a clustered system consists of multiple systems joined to act as one.
2) In a multiprocessor system, multiple CPUs share the memory, bus and other peripheral devices; clustered computers have shared storage and are connected together using a LAN or another fast network.
3) A multiprocessor system provides lower availability: if the computer fails, all the processors in it stop working. A clustered system provides high availability of services: if one or more computers in the cluster fail, the system continues to work.
4) The cost of a multiprocessor system is less than that of a clustered system; a clustered system costs more.
5) A multiprocessor system uses local storage; a clustered system uses a SAN (Storage Area Network), a shared pool of storage.
2. Asymmetric multiprocessing vs. symmetric multiprocessing
1) In asymmetric multiprocessing, the processors are not treated equally; in symmetric multiprocessing, all the processors are treated equally.
2) In asymmetric multiprocessing, the tasks of the operating system are done by the master processor; in symmetric multiprocessing, they are done by the individual processors.
3) In asymmetric multiprocessing there is no communication between processors, as they are controlled by the master processor; in symmetric multiprocessing, all processors communicate with one another through shared memory.
4) In asymmetric multiprocessing the processors have a master-slave relationship; in symmetric multiprocessing each processor takes processes from a common ready queue.
5) Asymmetric multiprocessing systems are cheaper; symmetric multiprocessing systems are costlier.
6) Asymmetric multiprocessing systems are easier to design; symmetric multiprocessing systems are more complex to design.

3. Client-server computing vs. peer-to-peer computing
1) In a client-server network, clients and servers are differentiated: specific servers and clients are present. In a peer-to-peer network, clients and servers are not differentiated.
2) A client-server network focuses on information sharing, while a peer-to-peer network focuses on connectivity.
3) In a client-server network, a centralized server is used to store the data; in a peer-to-peer network, each peer has its own data.
4) In a client-server network, the server responds to the services requested by the client; in a peer-to-peer network, each node can both request and provide services.
5) Client-server networks are costlier than peer-to-peer networks.
6) Client-server networks are more stable; peer-to-peer networks become less stable as the number of peers increases.
7) Client-server networks are used for both small and large networks, while peer-to-peer networks are generally suited to small networks with fewer than 10 computers.
1.7 Open source operating System
The term "open source" refers to computer software or applications where the owners or copyright holders allow users or third parties to use, view, and edit the product's source code. The source code of an open-source OS is publicly visible and editable. In contrast, common operating systems such as Apple's iOS, Microsoft's Windows, and Apple's Mac OS are closed (proprietary) operating systems.

Open-source software is licensed in such a way that it is permissible to produce as many copies as you want and to use them wherever you like. It generally uses fewer resources than its commercial counterpart because it lacks any code for licensing, promoting other products, authentication, attaching advertisements, and so on. An open-source operating system allows the use of code that is freely distributed, available to anyone, and usable for commercial purposes. Because the source code is available, the user may modify it and develop new applications according to the user's requirements. Some basic examples of open-source operating systems are Linux, OpenSolaris, FreeRTOS, OpenBSD, FreeBSD, Minix, etc. The first open-source software was released in 1997, and today there are open-source alternatives in almost every category of software. Thanks to technological developments and innovations, many open-source operating systems have been developed since the dawn of the 21st century.

1.8 Booting
After an operating system is generated, it must be made available for use by the hardware. But how does the hardware know where the kernel is, or how to load that kernel? The procedure of starting a computer by loading the kernel is known as booting the system. It needs a special program, stored in ROM, to do this
job, known as the bootstrap loader. An example is the BIOS (Basic Input/Output System). A modern PC BIOS supports booting from various devices, and typically allows the user to configure a boot order. If the boot order is set to: CD drive, hard disk drive, network, then the BIOS will try to boot from the CD drive first; if that fails it will try to boot from the hard disk drive; if that fails it will try to boot from the network; and if that fails it will not boot at all.

Booting is the startup sequence that starts the operating system of a computer when it is turned on. A boot sequence is the initial set of operations that the computer performs when it is switched on; every computer has a boot sequence. The bootstrap loader locates the kernel, loads it into main memory and starts its execution. In some systems, a simple bootstrap loader fetches a more complex boot program from disk, which in turn loads the kernel.

Dual Booting:
When two operating systems are installed on a computer system, it is called dual booting. In fact, multiple operating systems can be installed on such a system. But how does the system know which operating system to boot? A boot loader that understands multiple file systems and multiple operating systems can occupy the boot space. Once loaded, it can boot one of the operating systems available on the disk. The disk can have multiple partitions, each containing a different operating system. When the computer is turned on, a boot manager program displays a menu, allowing the user to choose the operating system to use.

1.9 Operating System services
An operating system provides the environment within which programs are executed, and it provides certain services to programs and to the users of those programs. The specific services provided differ from one operating system to
another, but we can identify common classes. These operating-system services are provided for the convenience of the programmer, to make the programming task easier. The following figure shows one view of the various operating-system services and how they interrelate.

Operating System Services:
One set of operating-system services provides functions that are helpful to the user.
• User interface: Almost all operating systems have a user interface (UI). This interface can take several forms. One is a command-line interface (CLI), which uses text commands and a method for entering them (a keyboard for typing in commands in a specific format with specific options). Another is a batch interface, in which commands and directives to control those commands are entered into files, and those files are executed. Most commonly, a graphical user interface (GUI) is used; here the interface is a window system with a pointing device to direct I/O, choose from menus, and make selections, and a keyboard to enter text. Some systems provide two or all three of these variations.
• Program execution: The system must be able to load a program into memory and to run that program. The program must be able to end its execution, either normally or abnormally (indicating an error).
• I/O operations: A running program may require I/O, which may involve a file or an I/O device. For specific devices, special functions may be desired (such as recording to a CD or DVD drive or blanking a display screen). For efficiency and protection, users usually cannot control I/O devices directly; therefore, the operating system must provide a means to do I/O.
• File-system manipulation: Many programs need to read and write files and directories. They also need to create and delete them by name, search for a given file, and list file information. Some operating systems include permissions
management to allow or deny access to files or directories based on file ownership. Many operating systems provide a variety of file systems, sometimes to allow personal choice and sometimes to provide specific features or performance characteristics.
• Communications: There are many circumstances in which one process needs to exchange information with another process. Such communication may occur between processes that are executing on the same computer or between processes that are executing on different computer systems tied together by a computer network. Communications may be implemented via shared memory, in which two or more processes read and write to a shared section of memory, or message passing, in which packets of information in predefined formats are moved between processes by the operating system.
• Error detection: The operating system needs to be detecting and correcting errors constantly. Errors may occur in the CPU and memory hardware (such as a memory error or a power failure), in I/O devices (such as a parity error on disk, a connection failure on a network, or lack of paper in the printer), and in the user program (such as an arithmetic overflow, an attempt to access an illegal memory location, or a too-great use of CPU time). For each type of error, the operating system should take the appropriate action to ensure correct and consistent computing. Sometimes it has no choice but to halt the system. At other times, it might terminate an error-causing process or return an error code to a process for the process to detect and possibly correct.

Another set of operating-system functions exists not for helping the user but rather for ensuring the efficient operation of the system itself. Systems with multiple users can gain efficiency by sharing the computer resources among the users.
• Resource allocation: When there are multiple users or multiple jobs running at the same time, resources must be allocated to each of them. The operating system manages many different types of resources. Some (such as CPU cycles, main memory, and file storage) may have special allocation code, whereas others (such as I/O devices) may have much more general request and release code. For instance, in determining how best to use the CPU, operating systems have CPU-scheduling routines that take into account the speed of the CPU, the jobs that must be executed, the number of registers available, and other factors. There may also be routines to allocate printers, USB storage drives, and other peripheral devices.
• Accounting: We want to keep track of which users use how much and what kinds of computer resources. This record keeping may be used for accounting (so that users can be billed) or simply for accumulating usage statistics. Usage statistics may be a valuable tool for researchers who wish to reconfigure the system to improve computing services.
• Protection and security: The owners of information stored in a multiuser or networked computer system may want to control use of that information. When
several separate processes execute concurrently, it should not be possible for one process to interfere with the others or with the operating system itself. Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders is also important. Such security starts with requiring each user to authenticate himself or herself to the system, usually by means of a password, to gain access to system resources.

1.10 System calls - Types of System calls and their working
System calls provide an interface to the services made available by an operating system. These calls are generally available as routines written in C and C++.

1.10.1 How system calls are used:
Consider an example of writing a simple program to read data from one file and copy them to another file.
- The first input that the program will need is the names of the two files: the input file and the output file. These names can be specified in many ways, depending on the operating-system design. One approach is for the program to ask the user for the names. In an interactive system, this approach requires a sequence of system calls, first to write a prompting message on the screen and then to read from the keyboard the characters that define the two files. On mouse-based and icon-based systems, a menu of file names is usually displayed in a window; the user can then use the mouse to select the source name, and a window can be opened for the destination name to be specified. This sequence requires many I/O system calls.
- Once the two file names have been obtained, the program must open the input file and create the output file. Each of these operations requires another system call.
- When both files are set up, we enter a loop that reads from the input file (a system call) and writes to the output file (another system call).
- Finally, after the entire file is copied, the program closes both files (another system call), writes a message to the console or window (more system calls), and finally terminates normally (the final system call).
Fig. Example of how system calls are used

Many operating systems provide an Application Programming Interface (API), which specifies a set of functions that are available to application programmers. Application programmers use the API calls directly to develop their applications; the API hides the details of the underlying system calls from the programmer. Three of the most common APIs available to application programmers are:
1. The Windows API for Windows systems.
2. The POSIX API for POSIX-based systems (which include virtually all versions of UNIX, Linux, and Mac OS X).
3. The Java API for programs that run on the Java virtual machine.

1.10.2 An Example of standard API:
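The original slide illustrated a standard API with a figure. As a stand-in, the following C sketch shows the POSIX file API in use and, at the same time, the copy sequence described in section 1.10.1: open the input file, create the output file, loop over read() and write(), and close both files. The file names in.txt and out.txt are placeholders chosen for this example.

```c
#include <fcntl.h>      /* open()                   */
#include <unistd.h>     /* read(), write(), close() */
#include <stdio.h>      /* perror()                 */

int main(void)
{
    /* Open the input file and create/truncate the output file
     * (two system calls, as described in section 1.10.1). */
    int in  = open("in.txt", O_RDONLY);
    int out = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) { perror("open"); return 1; }

    /* Copy loop: each iteration is one read() and one write() system call. */
    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof(buf))) > 0)
        write(out, buf, (size_t)n);   /* sketch: assumes the full buffer is written */

    /* Close both files and terminate normally. */
    close(in);
    close(out);
    return 0;
}
```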
1.10.3 The run-time support system (system-call interface):
The run-time support system (a set of functions built into libraries included with a compiler) provides a system-call interface that serves as the link to system calls made available by the operating system. The system-call interface intercepts function calls in the API and invokes the necessary system calls within the operating system. Typically, a number is associated with each system call, and the system-call interface maintains a table indexed according to these numbers. The system-call interface then invokes the intended system call in the operating-system kernel and returns the status of the system call and any return values. The caller need know nothing about how the system call is implemented or what it does during execution; the caller need only obey the API and understand what the operating system will do as a result of the execution of that system call. Thus, most of the details of the operating-system interface are hidden from the programmer by the API and are managed by the run-time support library. The following figure shows the relationship between an API, the system-call interface, and the operating system, and illustrates how the operating system handles a user application invoking the open() system call.
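On Linux, the numbered system-call interface can be observed directly: the C library's getpid() wrapper and a raw syscall() with the corresponding system-call number reach the same kernel service. This is a Linux-specific sketch added for illustration and is not taken from the notes.

```c
#include <stdio.h>
#include <unistd.h>         /* getpid(), syscall()         */
#include <sys/syscall.h>    /* SYS_getpid (Linux-specific) */

int main(void)
{
    /* The usual route: the C library wrapper hides the system-call number. */
    pid_t via_wrapper = getpid();

    /* The same service invoked through the numbered system-call interface. */
    long via_number = syscall(SYS_getpid);

    printf("getpid()            -> %ld\n", (long)via_wrapper);
    printf("syscall(SYS_getpid) -> %ld\n", via_number);
    return 0;
}
```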
1.10.4 Passing parameters to system calls:
Three general methods are used to pass parameters to the operating system.
1. Pass the parameters in registers. Parameters are accessed much faster in registers, but in some cases there may be more parameters than registers.
2. In those cases, the parameters are generally stored in a block, or table, in memory, and the address of the block is passed as a parameter in a register. This is the approach taken by Linux and Solaris.
3. Parameters can also be placed, or pushed, onto the stack by the program and popped off the stack by the operating system.
Some operating systems prefer the block or stack method because those approaches do not limit the number or length of parameters being passed.

Fig. Passing of parameters as a table.
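At the level of the C API the same idea, passing a block of parameters by its address, can be seen in the POSIX writev() call, which hands the kernel the address of an array of iovec descriptors rather than the individual buffers. This is offered only as an analogy for illustration; the register-level convention itself is not visible from C.

```c
#include <sys/uio.h>    /* writev(), struct iovec */
#include <unistd.h>     /* STDOUT_FILENO          */

int main(void)
{
    char part1[] = "parameters packed ";
    char part2[] = "into one block\n";

    /* The "block in memory": an array describing both buffers. */
    struct iovec iov[2];
    iov[0].iov_base = part1;
    iov[0].iov_len  = sizeof(part1) - 1;
    iov[1].iov_base = part2;
    iov[1].iov_len  = sizeof(part2) - 1;

    /* Only the address of the block (and its length) is passed to the kernel. */
    writev(STDOUT_FILENO, iov, 2);
    return 0;
}
```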
1.10.5 Types of System Calls
System calls can be grouped roughly into six major categories:
1. Process control
2. File manipulation
3. Device manipulation
4. Information maintenance
5. Communications
6. Protection

Process Control
A running program needs to be able to halt its execution either normally (end()) or abnormally (abort()). If a system call is made to terminate the currently running program abnormally, or if the program runs into a problem and causes an error trap, a dump of memory is sometimes taken and an error message generated. The dump is written to disk and may be examined by a debugger, a system program designed to aid the programmer in finding and correcting errors (bugs), to determine the cause of the problem. Under either normal or abnormal circumstances, the operating system must transfer control to the invoking command interpreter, which then simply continues with the next command.

The command interpreter loads a program into memory (load system call) as directed by the user command, such as a text command or a mouse click. Once the program is loaded, the operating system starts its execution (execute system call); every program is executed by the operating system after creating a separate process for it (create process). We may want to terminate the executing process normally or abnormally by invoking the terminate process system call. In order to control executing processes we need to determine and reset attributes such as priority, maximum allowable execution time, maximum memory allocation and so on (get process attributes and set process attributes). Sometimes we may need to wait for processes to finish their execution, to wait for a certain amount of time (wait time), or to wait for a specific event to occur (wait event). Waiting processes receive a signal when the event occurs, so that they wake up and continue execution (signal event).
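On UNIX-like systems the process-control calls described above appear as fork(), exec(), wait() and exit(). The minimal C sketch below, offered as a general illustration rather than something taken from the notes, creates a child process, replaces its image with the ls program, and has the parent wait for the child to terminate.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>     /* fork(), execlp() */
#include <sys/wait.h>   /* waitpid()        */

int main(void)
{
    pid_t pid = fork();              /* create process */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }

    if (pid == 0) {
        /* Child: load and execute a new program. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");            /* reached only if exec fails */
        _exit(1);
    }

    /* Parent: wait for the child to finish (wait event). */
    int status;
    waitpid(pid, &status, 0);
    printf("child %ld terminated\n", (long)pid);
    return 0;                        /* terminate normally */
}
```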
File Management
In most cases the user or programmer first needs to be able to create() and delete() files. Either system call requires the name of the file and perhaps some of the file's attributes. Once the file is created, we need to open() it and use it. The user may also read(), write(), or reposition() it (rewind or skip to the end of the file, for example). Finally, we need to close() the file, indicating that we are no longer using it. We may need these same sets of operations for directories if we have a directory structure for organizing files in the file system. In addition, for either files or directories, we need to be able to determine the values of various attributes and perhaps to reset them if necessary. File attributes include the file name, file type, protection codes, accounting information, and so on. At least two system calls, get file attributes() and set file attributes(), are required for this function. Some operating systems provide many more calls, such as calls for file move() and copy(). Others might provide an API that performs those operations using code and other system calls, and others might provide system programs to perform those tasks. If the system programs are callable by other programs, then each can be considered an API by other system programs.
Device Management
A process may need several resources in order to execute: main memory, disk drives, access to files, and so on. If the resources are available, they can be granted, and control can be returned to the user process; otherwise, the process will have to wait until sufficient resources are available. The various resources controlled by the operating system can be thought of as devices. Some of these devices are physical devices (for example, disk drives), while others can be thought of as abstract or virtual devices (for example, files). A system with multiple users may require us to first request() a device, to ensure exclusive use of it. After we are finished with the device, we release() it. These functions are similar to the open() and close() system calls for files. Other operating systems allow unmanaged access to devices. Once the device has been requested (and allocated to us), we can read(), write(), and (possibly) reposition() the device, just as we can with files. Many operating systems, e.g. UNIX or Linux, treat every device as a file, so the operations used for device management are almost the same as those used for file management. The user interface can also make files and devices appear to be similar even though the underlying system calls are dissimilar.

Information Maintenance
Many system calls exist simply for the purpose of transferring information between the user program and the operating system. For example, most systems have a system call to return the current time() and date(). Other system calls may return information about the system, such as the number of current users, the version number of the operating system, the amount of free memory or disk space, and so on.

Another set of system calls is helpful in debugging a program. Many systems provide system calls to dump() memory; this provision is useful for debugging. A program trace lists each system call as it is executed. Even microprocessors provide a CPU mode known as single step, in which a trap is executed by the CPU after every instruction. The trap is usually caught by a
debugger. Many operating systems provide a time profile of a program to indicate the amount of time that the program executes at a particular location or set of locations. A time profile requires either a tracing facility or regular timer interrupts. At every occurrence of the timer interrupt, the value of the program counter is recorded; with sufficiently frequent timer interrupts, a statistical picture of the time spent on various parts of the program can be obtained. In addition, the operating system keeps information about all its processes, and system calls are used to access this information. Generally, calls are also used to reset the process information (get process attributes() and set process attributes()).

Communication
There are two common models of interprocess communication: the message-passing model and the shared-memory model. In the message-passing model, the communicating processes exchange messages with one another to transfer information. Messages can be exchanged between the processes either directly or indirectly through a common mailbox. Before communication can take place, a connection must be opened. The name of the other communicator must be known, be it another process on the same system or a process on another computer connected by a communications network. Each computer in a network has a host name by which it is commonly known, and a host also has a network identifier, such as an IP address. Similarly, each process has a process name, and this name is translated into an identifier by which the operating system can refer to the process. The get_hostid() and get_processid() system calls do this translation. The identifiers are then passed to the general-purpose open() and close() calls provided by the file system or to specific open_connection() and close_connection() system calls, depending on the system's model of communication. The recipient process usually must give its permission for communication to take place with an accept_connection() call. Most processes that will be receiving connections are special-purpose daemons, which are system programs provided for that purpose. They execute a wait_for_connection() call and are awakened when a connection is made. The source of the communication, known as the client, and the receiving daemon, known as a server, then exchange messages by using read_message() and write_message() system calls. The close_connection() call terminates the communication.
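A concrete, if simplified, instance of message passing on a UNIX-like system is a pipe between a parent and a child process. The C sketch below is a general illustration (not from the notes) that sends one message from the child to the parent through a pipe.

```c
#include <stdio.h>
#include <unistd.h>     /* pipe(), fork(), read(), write() */
#include <sys/wait.h>   /* wait()                          */

int main(void)
{
    int fd[2];                      /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: acts as the sender. */
        close(fd[0]);
        const char msg[] = "hello from the child process";
        write(fd[1], msg, sizeof(msg));   /* includes the terminating NUL */
        close(fd[1]);
        _exit(0);
    }

    /* Parent: acts as the receiver. */
    close(fd[1]);
    char buf[128];
    ssize_t n = read(fd[0], buf, sizeof(buf));
    if (n > 0)
        printf("received message: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}
```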
In the shared-memory model, processes use shared memory create() and shared memory attach() system calls to create and gain access to regions of memory owned by other processes. Recall that, normally, the operating system tries to prevent one process from accessing another process's memory; shared memory requires that two or more processes agree to remove this restriction. They can then exchange information by reading and writing data in the shared areas. The form of the data is determined by the processes and is not under the operating system's control. The processes are also responsible for ensuring that they are not writing to the same location simultaneously.
Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided, and it is easier to implement than shared memory for intercomputer communication. Shared memory allows maximum speed and convenience of communication, since it can be done at memory-transfer speeds when it takes place within a computer. Problems exist, however, in the areas of protection and synchronization between the processes sharing memory.

Protection
Protection provides a mechanism for controlling access to the resources provided by a computer system. Historically, protection was a concern only on multiprogrammed computer systems with several users. However, with the advent of networking and the Internet, all computer systems, from servers to mobile handheld devices, must be concerned with protection. Typically, system calls providing protection include set_permission() and get_permission(), which manipulate the permission settings of resources such as files and disks. The allow_user() and deny_user() system calls specify whether particular users can, or cannot, be allowed access to certain resources.
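Returning to the shared-memory model described above: the create/attach calls named there are generic, but on POSIX systems the same idea can be sketched with shm_open() and mmap(). The object name /demo_shm below is a hypothetical example, and -lrt may be needed when linking on some platforms:

/* Minimal shared-memory sketch (assumes a POSIX system). The generic
 * "shared memory create/attach" calls in the text correspond here to
 * shm_open() and mmap(). */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void) {
    const char *name = "/demo_shm";                     /* hypothetical name */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);    /* "create"          */
    if (fd == -1) { perror("shm_open"); return 1; }

    if (ftruncate(fd, 4096) == -1) { perror("ftruncate"); return 1; }
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);             /* "attach"          */
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(region, "data visible to any process that maps /demo_shm");
    printf("%s\n", region);

    munmap(region, 4096);                               /* detach            */
    shm_unlink(name);                                   /* remove the object */
    return 0;
}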
Chapter 2 Process Management
2.1 Process Concept – The process, process states, process control block
2.2 Process Scheduling – Scheduling queues, schedulers, context switch
2.3 Operations on Processes – Process creation with a program using fork(), process termination
2.4 Thread Scheduling – Threads, benefits, multithreading models, thread libraries

2.1 Process Concepts
2.1.1 Process
A process is a program in execution, loaded into main memory. In main memory, a process consists of four sections:
1. Code Section: contains the instructions of the process.
2. Data Section: contains global and static variables.
3. Heap Section: contains memory allocated dynamically during the execution of the program.
4. Stack Section: contains function parameters, local variables and the return address of each function call.
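A small illustrative C program showing where each kind of object typically lives among these four sections:

/* Illustrative sketch: where the pieces of a simple C program live in the
 * four sections of a process image. */
#include <stdio.h>
#include <stdlib.h>

int counter = 0;                 /* data section: global/static variables  */

int square(int x) {              /* code (text) section: the instructions  */
    int result = x * x;          /* stack section: locals, parameters, and */
    return result;               /* the return address of this call        */
}

int main(void) {
    int *buf = malloc(10 * sizeof(int));  /* heap: dynamically allocated   */
    buf[0] = square(++counter);
    printf("%d\n", buf[0]);
    free(buf);
    return 0;
}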
2.1.2 Process States
During its lifetime, a process can be in one of the following five states:
1. New: The process is being created.
2. Running: Instructions of the process are being executed.
3. Waiting: The process is waiting for some event to occur.
4. Ready: The process is waiting to be assigned to a processor.
5. Terminated: The process has finished execution.
Only one process can be running on any processor at any time, but many processes may be in the ready and waiting states. Ready processes are kept in a ready queue.
Explanation of the state transitions:
New -> Ready: The operating system creates a process and prepares it for execution, then moves it into the ready queue.
Ready -> Running: When it is time to select a process to run, the operating system selects one of the jobs from the ready queue and moves it from the ready state to the running state.
Running -> Terminated: When a process has completed its execution, the operating system terminates it from the running state.
Running -> Ready: When the time slice of the running process expires, the operating system moves it back to the ready state.
Running -> Waiting: A process is moved to the waiting state if it must wait for some event to occur or for an I/O device that is not yet available.
Waiting -> Ready: A process in the waiting (blocked) state is moved to the ready state when the event for which it has been waiting occurs.

2.1.3 Process Control Block (PCB)
Every process is represented in the operating system by a process control block, also called a task control block. The important components of a PCB are:
- Process state: A process can be new, ready, running, waiting, etc.
- Program counter: The address of the next instruction to be executed for this process.
- CPU registers: Accumulators, index and general-purpose registers, and condition-code information.
- CPU-scheduling information: Process priority, pointers to scheduling queues, and other scheduling parameters.
- Accounting information: The amount of CPU time and real time used, job or process numbers, etc.
- Memory-management information: The values of the base and limit registers and the page or segment tables, depending on the memory system used by the operating system.
- I/O status information: The list of open files, the list of I/O devices allocated to the process, etc.

2.2 Process Scheduling
Process scheduling is an important part of multiprogramming operating systems. It is the activity of removing the running process from the processor (CPU) and selecting another process for execution. It moves processes among states such as ready, waiting and running.
2.2.1 Scheduling Queues
Job Queue: Every process created in the system is put into the job queue, i.e. the job queue contains the PCBs of all processes in the system. As processes enter the system, they are put into the job queue.
Ready Queue: The processes in main memory that are ready and waiting for the CPU have their PCBs stored in the ready queue.
Device Queue: The processes waiting for a particular I/O device have their PCBs stored in that device's queue. Every device has its own device queue.
Fig. Ready queue: a queue header with head and tail pointers linking the PCBs of the ready processes (P1 ... Pn); each PCB holds the process state, CPU registers and other fields.
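A purely illustrative sketch (not a real kernel structure) of a simplified PCB and a ready queue kept as a linked list of PCBs, matching the figure above:

/* Illustrative sketch: a simplified PCB and a ready queue implemented as a
 * linked list of PCBs with a queue header holding head and tail pointers. */
#include <stdio.h>
#include <stdlib.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

typedef struct pcb {
    int           pid;            /* process identifier                    */
    proc_state    state;          /* process state                         */
    unsigned long pc;             /* program counter                       */
    unsigned long registers[8];   /* saved CPU registers (simplified)      */
    int           priority;       /* CPU-scheduling information            */
    struct pcb   *next;           /* link to the next PCB in the queue     */
} pcb;

typedef struct { pcb *head, *tail; } queue;   /* queue header              */

void enqueue(queue *q, pcb *p) {              /* add a PCB at the tail     */
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

pcb *dequeue(queue *q) {                      /* remove the PCB at the head*/
    pcb *p = q->head;
    if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
    return p;
}

int main(void) {
    queue ready = {NULL, NULL};
    for (int i = 1; i <= 3; i++) {
        pcb *p = calloc(1, sizeof *p);
        p->pid = i;
        p->state = READY;
        enqueue(&ready, p);
    }
    pcb *next = dequeue(&ready);              /* scheduler picks a process */
    next->state = RUNNING;
    printf("dispatching pid %d\n", next->pid);
    return 0;
}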
A new process is initially put in the ready queue, where it waits until it is selected for execution (dispatched). Once the process is assigned to the CPU and is executing, one of the following events can occur:
- The process issues an I/O request and is placed in an I/O (device) queue.
- The process creates a new subprocess and waits for its termination.
- The process is removed forcibly from the CPU, as the result of an interrupt, and is put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the ready state and is then put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.
Two-State Process Model
The two states in this model are:
- Running
- Not Running
Running: When a new process is created, it enters the system and is in the running state whenever the CPU is assigned to it.
Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a specific process.
2.2.2 Schedulers
Schedulers are special system software that handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types:
- Long-Term Scheduler
- Short-Term Scheduler
- Medium-Term Scheduler
Long-Term Scheduler: It is also called the job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the job queue and loads them into memory for execution, where they become eligible for CPU scheduling. The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of multiprogramming: if the degree of multiprogramming is stable, the average rate of process creation must be equal to the average departure rate of processes leaving the system. On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems have no long-term scheduler. The long-term scheduler is used when a process changes state from new to ready.
Short-Term Scheduler: It is also called the CPU scheduler. Its main objective is to increase system performance according to a chosen set of criteria. It changes a process from the ready state to the running state: the CPU scheduler selects one of the processes that are ready to execute and allocates the CPU to it. Short-term schedulers, also known as dispatchers, decide which process to execute next. Short-term schedulers are faster than long-term schedulers.
Medium-Term Scheduler: Medium-term scheduling is part of swapping. It removes processes from memory and thus reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling swapped-out processes. A running process may become suspended if it makes an I/O request, and a suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to
secondary storage. This process is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Comparison among Schedulers:
1. Type: The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. Speed: The long-term scheduler is slower than the short-term scheduler; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed lies in between.
3. Degree of multiprogramming: The long-term scheduler controls it; the short-term scheduler provides less control over it; the medium-term scheduler reduces it.
4. Time-sharing systems: The long-term scheduler is almost absent or minimal; the short-term scheduler is also minimal; the medium-term scheduler is a part of time-sharing systems.
5. Function: The long-term scheduler selects processes from the pool and loads them into memory for execution; the short-term scheduler selects those processes that are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.

2.2.3 Context Switch
A context switch is the mechanism of storing and restoring the state (context) of a CPU in the process control block so that a process's execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU, and context switching is an essential feature of a multitasking operating system. When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored in its process control block. After this, the state of the process to run next is loaded from its own PCB and used to set the program counter, registers, etc., at which point the second process can start executing. Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce context-switching time, some hardware systems provide two or more sets of processor registers.
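A purely illustrative sketch of the idea (real context switches are performed in assembly inside the kernel; the types and names below are hypothetical):

/* Purely illustrative sketch of a context switch: save the running task's
 * register state into its PCB, then load the next task's state. Real kernels
 * do this in assembly; the structures and names here are hypothetical. */
#include <stdio.h>

typedef struct {
    unsigned long pc;           /* program counter                         */
    unsigned long sp;           /* stack pointer                           */
    unsigned long regs[8];      /* general-purpose registers               */
} cpu_context;

typedef struct {
    int         pid;
    cpu_context context;        /* saved state lives in the PCB            */
} pcb;

/* Stand-in for the real CPU registers. */
static cpu_context current_cpu_state;

void context_switch(pcb *from, pcb *to) {
    from->context = current_cpu_state;   /* 1. save old state in its PCB   */
    /* ... here 'from' would be moved to the ready or waiting queue ...    */
    current_cpu_state = to->context;     /* 2. restore the new state       */
    printf("switched from pid %d to pid %d\n", from->pid, to->pid);
}

int main(void) {
    pcb a = {1, {0}}, b = {2, {0}};
    context_switch(&a, &b);
    return 0;
}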
When a process is switched out, the following information is stored for later use:
- Program counter
- Scheduling information
- Base and limit register values
- Currently used registers
- Changed state
- I/O state information
- Accounting information
Context Switching Steps
The steps involved in context switching are as follows:
- Save the context of the process that is currently running on the CPU, and update its process control block and other important fields.
- Move the process control block of that process into the relevant queue, such as the ready queue or an I/O queue.
- Select a new process for execution.
- Update the process control block of the selected process, including changing its state to running.
- Update the memory-management data structures as required.
- Restore the context of the process that is being loaded onto the processor, by loading the previously saved values of its process control block and registers.

2.3 Operations on Processes
Many operations can be performed on processes, including process creation, process preemption, process blocking and process termination. These are described below.
2.3.1 Process Creation
Processes need to be created in the system for different operations. This can happen through the following events:
- A user request for process creation
- System initialization
- Execution of a process-creation system call by a running process
- Initialization of a batch job
A process may be created by another process using fork(). The creating process is called the parent process and the created process is the child process. A child process can have only one parent, but a parent process may have many children. Both the parent and child processes have the same memory image, open files and environment strings; however, they have distinct address spaces.
Fig. Process creation using fork()
Let us take the following example:

#include <stdio.h>
#include <unistd.h>

int main() {
    printf("Before Forking\n");
    fork();
    printf("After Forking\n");
    return 0;
}

If the call to fork() is executed successfully, Linux will:
- make two identical copies of the address space, one for the parent and the other for the child;
- start both processes at the next statement following the fork() call.
Output of the above program:
Before Forking
After Forking
After Forking
The printf() statement after the fork() system call is executed by the parent as well as the child process, because both processes start their execution right after the fork() call. Since both processes have identical but separate address spaces, variables initialized before the fork() call have the same values in both address spaces. Because every process has its own address space, any modification is independent of the other: if the parent changes the value of a variable, the modification affects only the variable in the parent's address space. Address spaces created by other fork() calls are not affected, even though they contain identically named variables.
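To tell the parent and the child apart, programs normally test fork()'s return value, which is 0 in the child and the child's PID in the parent. A minimal sketch, assuming a UNIX-like system:

/* Minimal sketch (UNIX-like system): fork() returns 0 in the child and the
 * child's PID in the parent, so the two copies can take different paths. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        printf("child : pid=%d, parent=%d\n", getpid(), getppid());
    } else if (pid > 0) {
        wait(NULL);                      /* parent waits for the child     */
        printf("parent: pid=%d created child %d\n", getpid(), (int)pid);
    } else {
        perror("fork");                  /* fork() failed                  */
        return 1;
    }
    return 0;
}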
2.3.2 Process Preemption
An interrupt mechanism is used in preemption: it suspends the currently executing process, and the short-term scheduler determines the next process to execute. Preemption makes sure that all processes get some CPU time for execution.
Fig. Process preemption
2.3.3 Process Blocking
A process is blocked when it is waiting for some event to occur, such as completion of an I/O operation, which does not require the processor. After the event completes, the process goes back to the ready state.
Fig. Process blocking
2.3.4 Process Termination
After a process has executed its last instruction, it is terminated. The operating system terminates the process using the exit() system call. When a process terminates, it may return data (a status value) to its parent process, and the resources it held, such as memory, files and I/O devices, are deallocated by the operating system. A child process may also be terminated if its parent process requests its termination.
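A minimal sketch of normal termination, assuming a UNIX-like system: the child terminates with exit(), and the parent collects the child's exit status with waitpid():

/* Minimal sketch (UNIX-like system): the child terminates via exit() and the
 * parent retrieves its exit status with waitpid(). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        exit(42);                         /* child terminates normally      */
    }
    int status;
    waitpid(pid, &status, 0);             /* parent collects exit status    */
    if (WIFEXITED(status))
        printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}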
A process usually terminates for one of the following reasons:
1. Normal exit: for example, when a compiler has finished compiling the program and its task is complete.
2. Error exit: for example, the user tries to execute a file that does not exist.
3. Fatal error (caused by a bug): for example, referencing non-existent memory or dividing by zero.
4. Killed by another process: one process may kill another, e.g. a parent process may kill a child process using the kill() system call.

2.4 Thread Scheduling
2.4.1 What is a Thread?
A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, its own registers that hold its current working variables, and its own stack that contains its execution history. A thread shares with its peer threads information such as the code segment, the data segment and open files; when one thread alters a memory item in a shared segment, all other threads see the change. A thread is also called a lightweight process.
Threads provide a way to improve application performance through parallelism. Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been used successfully in implementing network servers and web servers, and they also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors. The following figure shows the working of a single-threaded and a multithreaded process.
Difference between Process and Thread:
1. A process is heavyweight and resource intensive; a thread is lightweight, taking fewer resources than a process.
2. Process switching needs interaction with the operating system; thread switching does not need to interact with the operating system.
3. In multiple-processing environments, each process executes the same code but has its own memory and file resources; all threads of a process can share the same set of open files and child processes.
4. If one process is blocked, no other process can execute until the first process is unblocked; while one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without using threads use more resources; multithreaded processes use fewer resources.
6. In multiple processes, each process operates independently of the others; one thread can read, write or change another thread's data.

2.4.2 Benefits of Threads
- Enhanced throughput of the system: When a process is split into many threads and each thread is treated as a job, the number of jobs completed per unit time increases, so the throughput of the system increases.
- Effective utilization of a multiprocessor system: With more than one thread in a process, more than one thread can be scheduled on more than one processor.
- Faster context switch: The context-switching time between threads is less than between processes; a process context switch means more overhead for the CPU.
- Responsiveness: When a process is split into several threads, the process can respond as soon as one of its threads completes its work.
- Communication: Communication between multiple threads is simple because the threads share the same address space, whereas processes must use explicit interprocess-communication mechanisms.
- Resource sharing: Resources such as code, data and files can be shared among all threads within a process. Note: the stack and registers cannot be shared; each thread has its own stack and registers.

2.4.3 Types of Threads
Threads are implemented in the following two ways:
- User-Level Threads: threads managed by the user application.
- Kernel-Level Threads: threads managed by the operating system acting on the kernel, the operating-system core.
User-Level Threads
Threads implemented at the user level are known as user threads. With user-level threads, thread management is done by the application, and the kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts. Because the kernel is unaware of user-level threads, all thread creation, scheduling and so on is done in user space without kernel intervention; therefore user-level threads are generally fast to create and manage. User-thread libraries include POSIX Pthreads, Mach C-threads and Solaris UI threads. The application starts with a single thread.
Advantages of user-level threads:
1. User threads are easier to implement than kernel threads.
2. User-level threads can be used on operating systems that do not support threads at the kernel level.
3. They are fast and efficient.
4. Context-switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. Their representation is very simple: the registers, PC, stack and a small thread control block are stored in the address space of the user-level process.
7. It is simple to create, switch and synchronize threads without kernel intervention.
Disadvantages of user-level threads:
1. User-level threads lack coordination between the threads and the kernel.
2. If a thread causes a page fault, the entire process is blocked.
Kernel-Level Threads
With kernel-level threads, thread management is done by the kernel; there is no thread-management code in the application area. Kernel threads are supported directly by the operating system. Any
application can be programmed to be multithreaded, and all of the threads within an application are supported within a single process. The kernel maintains context information for the process as a whole and for the individual threads within the process, and scheduling by the kernel is done on a thread basis. The kernel performs thread creation, scheduling and management in kernel space, so kernel threads are generally slower to create and manage than user threads. Because the kernel is managing the threads, if a thread performs a blocking system call, the kernel can schedule another thread of the same application for execution. In a multiprocessor environment, the kernel can schedule threads on different processors. Most contemporary operating systems, e.g. Windows NT, Windows 2000, Solaris 2, BeOS and Tru64 UNIX, support kernel threads.
Advantages of kernel-level threads:
1. The kernel is fully aware of all threads.
2. The scheduler may decide to give more CPU time to a process that has a large number of threads.
3. Kernel-level threads are good for applications that block frequently.
Disadvantages of kernel-level threads:
1. The kernel must manage and schedule all threads.
2. Kernel threads are more difficult to implement than user threads.
3. Kernel-level threads are slower than user-level threads.

2.4.4 Multithreading Models
Some operating systems provide a combined user-level and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads
within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three multithreading models:
- One-to-one
- Many-to-one
- Many-to-many
One-to-One Model
In the one-to-one model there is a one-to-one relationship between a user-level thread and a kernel-level thread: each user thread is mapped to a kernel thread. The following figure shows the one-to-one model. OS/2, Windows NT and Windows 2000 use the one-to-one model.
Advantages: This model provides more concurrency than the many-to-one model, and it allows multiple threads to execute in parallel on multiprocessors.
Disadvantages: For each user thread, a corresponding kernel thread is required; creating kernel threads adds overhead and reduces the performance of the system.
Many-to-One Model
The many-to-one model maps many user-level threads to one kernel thread. Thread management is done in user space, so it is efficient, but the entire process will block if a thread makes a blocking system call. Because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors. Green threads, a thread library available for Solaris 2, uses the many-to-one thread model.
Advantages: One kernel thread controls multiple user threads, and the model is efficient because thread management is done by the thread library in user space. It is used by user-level thread libraries such as Green threads.
Many-to-Many Model
In this model there are several user-level threads and several kernel-level threads, and the number of kernel threads created depends on the particular application. The developer can create as many threads as needed at both levels, though the numbers need not be the same. The many-to-many model is a compromise between the other two models: if any thread makes a blocking system call, the kernel can schedule another thread for execution, and the complexity introduced in the previous models is avoided. Although this model allows the creation of multiple kernel threads, true concurrency cannot be achieved by this model, because the kernel can schedule only one process at a time.
2.4.5 Thread Libraries
A thread library provides the programmer with an API for creating and managing threads. There are two primary ways of implementing a thread library:
- The first approach is to provide a library entirely in user space with no kernel support. All code and data structures for the library exist in user space, so invoking a function in the library results in a local function call in user space, not a system call.
- The second approach is to implement a kernel-level library supported directly by the operating system. In this case, code and data structures for the library exist in kernel space, and invoking a function in the library's API typically results in a system call to the kernel.
Three main thread libraries are in use today:
- POSIX Pthreads: Pthreads, the threads extension of the POSIX standard, may be provided as either a user-level or a kernel-level library. The Pthreads library is commonly available on Linux, UNIX, Solaris and Mac OS X. A Pthreads program must include the pthread.h header file.
- Win32: To create a thread using the Win32 library, the program must include the windows.h header file. The Win32 thread library is a kernel-level library, which means that invoking a Win32 library function results in a system call.
- Java: The Java thread API allows thread creation and management directly in Java programs. However, because in most instances the JVM runs on top of a host operating system, the Java thread API is typically implemented using a thread library available on the host system.
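A minimal Pthreads sketch (assuming a POSIX system; compile with -pthread; the function name worker and the passed value are arbitrary examples):

/* Minimal Pthreads sketch: create a thread, pass it an argument, and wait
 * for it to finish with pthread_join(). */
#include <stdio.h>
#include <pthread.h>

void *worker(void *arg) {                        /* the thread's start routine */
    int n = *(int *)arg;
    printf("thread computed %d\n", n * n);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int value = 7;
    pthread_create(&tid, NULL, worker, &value);  /* create the thread          */
    pthread_join(tid, NULL);                     /* wait for it to terminate   */
    return 0;
}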
Process vs Thread:
1. Processes cannot share memory with one another; threads can share memory and files.
2. Process execution and switching are comparatively slow; thread execution and switching are very fast.
3. It takes more time to create a process; it takes less time to create a thread.
4. A process takes more time to complete execution and terminate; a thread takes less time.
5. Processes are loosely coupled; threads are tightly coupled.
6. Processes are not well suited for parallel activities; threads are suitable for parallel activities.
7. System calls are required for processes to communicate with each other; threads do not require system calls to communicate.
8. Implementing communication between processes is difficult; communication between two threads is very easy.
Chapter 3 Process Scheduling
3.1 Basic Concept – CPU–I/O burst cycle, scheduling criteria, CPU scheduler, preemptive scheduling, dispatcher
3.2 Scheduling Algorithms – FCFS, SJF, priority scheduling, round-robin scheduling, multiple queue scheduling, multilevel feedback queue scheduling

3.1 Basic Concept
CPU Scheduling
CPU scheduling is the basis of multiprogrammed operating systems. The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization. Scheduling is a fundamental operating-system function: almost all computer resources are scheduled before use.
3.1.1 CPU–I/O Burst Cycle
Process execution consists of a cycle of CPU execution and I/O wait, and processes alternate between these two states. Process execution begins with a CPU burst, which is followed by an I/O burst, then another CPU burst, then another I/O burst, and so on. Eventually, the last CPU burst ends with a system request to terminate execution rather than with another I/O burst. A CPU burst typically consists of performing calculations, while an I/O burst consists of waiting for data to be transferred into or out of the system.
Fig. Alternating sequence of CPU and I/O bursts
3.1.2 Scheduling Criteria
Many criteria have been suggested for comparing CPU-scheduling algorithms. They include the following:
- CPU utilization: We want to keep the CPU as busy as possible. CPU utilization may range from 0 to 100 percent; in a real system, it should range from about 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
- Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, throughput might be ten processes per second.
- Turnaround time: The interval from the time of submission of a process to the time of its completion. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
- Waiting time: The sum of the periods spent waiting in the ready queue.
- Response time: In an interactive system, turnaround time may not be the best criterion. Another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response.
It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time and response time.
Optimization criteria:
- Maximum CPU utilization
- Maximum throughput
- Minimum turnaround time
- Minimum waiting time
- Minimum response time
3.1.3 CPU Scheduler
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection is carried out by the short-term scheduler (CPU scheduler). The ready queue is not necessarily a first-in, first-out (FIFO) queue; it may be a FIFO queue, a priority queue, a tree, or simply an unordered linked list.
Preemptive Scheduling
CPU-scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state
3. When a process switches from the waiting state to the ready state
4. When a process terminates
When scheduling takes place only under circumstances 1 and 4, the scheduling scheme is nonpreemptive; otherwise, it is preemptive. Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state. This scheduling method was used by early Microsoft Windows environments.
3.1.4 Dispatcher
The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This involves:
- switching context
- switching to user mode
- jumping to the proper location in the user program to restart that program
Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.
3.2 Scheduling Algorithms
Scheduling algorithms are used to decide which of the processes in the ready queue is to be allocated CPU time. In simple terms, scheduling
algorithms are used to schedule processes on the CPU. The common scheduling algorithms are:
- First Come First Served (FCFS) Scheduling
- Shortest Job First (SJF) Scheduling
- Priority Scheduling
- Round Robin Scheduling
- Multilevel Queue Scheduling
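As a small illustration of the scheduling criteria above, the sketch below runs three processes with hypothetical CPU burst times in FCFS order (all assumed to arrive at time 0) and computes each process's waiting and turnaround time:

/* Illustrative sketch: FCFS scheduling of three processes with hypothetical
 * burst times, all assumed to arrive at time 0. Computes waiting and
 * turnaround times and their averages. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                 /* hypothetical burst times   */
    int n = 3, waiting = 0, turnaround = 0, clock = 0;

    for (int i = 0; i < n; i++) {             /* run in arrival (FCFS) order */
        int wait = clock;                     /* time spent in ready queue  */
        clock += burst[i];                    /* process runs to completion */
        int tat = clock;                      /* turnaround = completion time
                                                 minus arrival time (0)     */
        printf("P%d: waiting=%2d turnaround=%2d\n", i + 1, wait, tat);
        waiting += wait;
        turnaround += tat;
    }
    printf("average waiting time    = %.2f\n", (double)waiting / n);
    printf("average turnaround time = %.2f\n", (double)turnaround / n);
    return 0;
}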
Chapter 4 Introduction to Distributed OS and Architecture
Distributed System Definition
General structure of a distributed system: From the point of view of a specific processor in a distributed system, the rest of the processors and their respective resources are remote, whereas its own resources are local. The processors in a distributed system may vary in function and size: they may include small microprocessors, workstations, minicomputers and large general-purpose computer systems. These processors are referred to by a number of names, such as sites, nodes, computers, machines and hosts, depending on the context in which they are mentioned. The word "site" is used to indicate the location of a machine, and "host" refers to a specific system at a site. Generally, one host at one site, the server, has a resource that another host at another site, the client (or user), would like to use. Any implementation of a distributed computing model (an abstract view of a system) must involve the implementation of processes, message links, routing schemes and timing.
The main purpose of a distributed system is to enable users to access remote resources over long distances and to share resources such as text, pictures, voice, video and so on with other users in a controlled way. A distributed system, in its simplest definition, is a group of computers working together so as to appear as a single computer to the end user. A distributed computing system is a collection of processors interconnected by a communication network, in which each processor has its own local memory and other peripherals, and communication between any two processors takes place by message passing over the communication network. For a particular processor, its own resources are local, whereas the other processors and their resources are remote.
A distributed operating system is one that looks to its users like an ordinary centralized operating system but runs on multiple, independent central processing units (CPUs). The key concept here is transparency: the use of multiple processors should be invisible (transparent) to the user. Another way of expressing the same idea is to say that the user views the system as a "virtual uniprocessor," not as a collection of distinct machines.
Advantages of Distributed Systems
Data/resource sharing: A distributed system enables a component to share data easily with other components of the system. This is possible because, in a distributed system, the nodes are interconnected for collaboration.
Scalability: Scalability means that we can change the size and extent of a system. Distributed systems provide excellent scalability, since more nodes can easily be added to the network.
Failure handling: A distributed system does not depend on a single node, so even if one node malfunctions, the other nodes continue to function properly and the system remains intact.
Reliability: For a system to be reliable, it should handle errors efficiently. Because distributed systems can tolerate individual crashes, they are quite reliable.
Efficiency: Distributed systems are highly efficient, as they involve multiple computers working together, which saves time for users; they can also provide higher performance than centralized systems.
Lower delay: Distributed systems provide low latency. For example, when a user loads a website, the system makes sure that a node located close to the user performs the loading task, in order to save time.
Disadvantages of Distributed Systems
Security issues: Security risks arise because the many nodes and connections of an open distributed system make it difficult to ensure adequate security.
High set-up cost: The initial cost of installation and set-up is high because of the many hardware and software components involved, and maintenance costs add to the total, making the system even more expensive.
Data loss: Data sent from one node to another can be lost midway between the source node and the destination node.
Difficult to manage: The hardware and software of a distributed system are quite complex; the hardware components are complicated to maintain and operate, and the software complexity also requires special attention.
Overloading: Overloading can occur if all the nodes of the distributed system try to send data at the same instant of time.
Design Goals of a Distributed System
1. Making resources accessible: The main goal of a distributed system is to make it easy for users (and applications) to access remote resources and to share them in a controlled and efficient way. Typical resources include printers, computers, storage facilities, data, files, Web pages and networks. There are many reasons to share resources.
One obvious reason is economics. For example, it is cheaper to let a printer be shared by several users in a small office than to buy and maintain a separate printer for each user. Likewise, it makes economic sense to share costly resources such as supercomputers, high-performance storage systems, image setters and other expensive peripherals.
2. Transparency: One of the main goals of a distributed operating system is to make the existence of multiple computers invisible (transparent) and to provide a single-system image to its users. That is, a distributed operating system must be designed so that a collection of distinct machines connected by a communication subsystem appears to its users as a virtual uniprocessor. There are seven forms of transparency in a distributed operating system:
2.1 Access Transparency: Access transparency means that users should not need to, or be able to, recognize whether a resource (hardware or software) is remote or local. This implies that the distributed operating system should allow users to access remote resources in the same way as local resources. That is, the user interface, which takes the form of a set of system calls, should not distinguish between local and remote resources, and it should be the responsibility of the distributed operating system to locate the resources and to arrange for servicing user requests in a user-transparent manner.
2.2 Location Transparency: The two main aspects of location transparency are as follows:
1. Name transparency: The name of a resource (hardware or software) should not reveal any hint of the physical location of the resource. That is, the name of a resource should be independent of the physical connectivity or topology of the system and of the current location of the resource. Furthermore, resources that are capable of being moved from one node to another in a distributed system (such as a file) must be allowed to move without having their names changed; therefore, resource names must be unique system-wide.
2. User mobility: No matter which machine a user is logged onto, he or she should be able to access a resource with the same name. That is, a user should not be required to use different names to access the same resource from two different nodes of the system. In a distributed system that supports user mobility, users can freely log on to any machine in the system and access any resource without extra effort.
2.3 Replication Transparency:
1. For better performance and reliability, almost all distributed operating systems have the provision to create replicas (additional copies) of files and other resources on different nodes of the distributed system. In these systems, both the existence of multiple copies of a replicated resource and the replication activity should be transparent to the users.
2. The two important issues related to replication transparency are the naming of replicas and replication control.
3. It is the responsibility of the system to name the various copies of a resource and to map a user-supplied name of the resource to an appropriate replica of the resource.
4. Furthermore, replication-control decisions, such as how many copies of the resource should be created, where each copy should be placed, and when a copy should be created or deleted, should be made entirely automatically by the system in a user-transparent manner.
2.4 Failure Transparency:
1. Failure transparency deals with masking partial failures in the system from the users, such as a communication link failure, a machine failure, or a storage-device crash.