Unit - I
An operating system acts as an intermediary between the user of a computer and
computer hardware. The purpose of an operating system is to provide an
environment in which a user can execute programs in a convenient and efficient
manner.
An operating system is software that manages the computer hardware. The
hardware must provide appropriate mechanisms to ensure the correct operation of
the computer system and to prevent user programs from interfering with the proper
operation of the system.
Operating System – Definition:
An operating system is a program that controls the execution of application
programs and acts as an interface between the user of a computer and the
computer hardware.
A more common definition is that the operating system is the one program
running at all times on the computer (usually called the kernel), with all else
being application programs.
An operating system is concerned with the allocation of resources and
services, such as memory, processors, devices and information. The operating
system correspondingly includes programs to manage these resources, such as
a traffic controller, a scheduler, memory management module, I/O programs,
and a file system.
Operating system as User Interface
1. User
2. System and application programs
3. Operating system
4. Hardware
Every general-purpose computer consists of hardware, an operating system,
system programs, and application programs. The hardware comprises the memory,
CPU, ALU, I/O devices, peripheral devices and storage devices. System
programs include compilers, loaders, editors, the OS itself, etc. Application
programs include business programs and database programs.
Fig1: Conceptual view of a computer system
Every computer must have an operating system to run other programs. The
operating system coordinates the use of the hardware among the various system
programs and application programs for various users. It simply provides an
environment within which other programs can do useful work.
The operating system is a set of special programs that run on a computer system
that allows it to work properly. It performs basic tasks such as recognizing input
from the keyboard, keeping track of files and directories on the disk, sending
output to the display screen and controlling peripheral devices.
An OS is designed to serve two basic purposes:
1. It controls the allocation and use of the computing system's resources
among the various users and tasks.
2. It provides an interface between the computer hardware and the programmer
that simplifies and makes feasible the creation, coding and debugging of
application programs.
The operating system must support the following tasks:
1. Provide facilities to create and modify programs and data files
using an editor.
2. Provide access to the compiler for translating a user program from a
high-level language to machine language.
3. Provide a loader program to move the compiled program code into the
computer's memory for execution.
4. Provide routines that handle the details of I/O programming.
Goal of an Operating System:
The fundamental goal of a computer system is to execute user programs and
to make tasks easier. Various application programs along with the hardware system
are used to perform this work.
An operating system is software which manages and controls the entire set of
resources and effectively utilizes every part of a computer.
The figure shows how OS acts as a medium between hardware unit and application
programs.
Need of Operating System:
OS as a platform for Application programs:
Operating system provides a platform, on top of which, other
programs, called application programs can run. These application programs
help the users to perform a specific task easily. It acts as an interface between
the computer and the user. It is designed in such a manner that it operates,
controls and executes various applications on the computer.
Managing Input-Output unit
Operating System also allows the computer to manage its own
resources such as memory, monitor, keyboard, printer etc. Management of these
resources is required for an effective utilization. The operating system controls
the various system input-output resources and allocates them to the users or
programs as per their requirement.
Consistent user interface
Operating System provides the user an easy-to-work user interface,
so the user doesn’t have to learn a different UI every time and can focus on
the content and be productive as quickly as possible. Operating System
provides templates, UI components to make the working of a computer, really
easy for the user.
Multitasking
Operating System manages memory and allows multiple programs
to run in their own space and even communicate with each other through
shared memory. Multitasking gives users a good experience as they can
perform several tasks on a computer at a time.
OS User Interface
One set of operating-system services provides functions that are
helpful to the user:
o User interface - Almost all operating systems have a user interface
(UI), which varies between Command-Line (CLI), Graphical User
Interface (GUI), and Batch
o Program execution - The system must be able to load a program into
memory and to run that program, end execution, either normally or
abnormally (indicating error)
o I/O operations - A running program may require I/O, which may
involve a file or an I/O device.
o File-system manipulation - The file system is of particular interest.
Obviously, programs need to read and write files and directories,
create and delete them, search them, list file information, and manage
permissions.
o Communications – Processes may exchange information, on the same
computer or between computers over a network
Communications may be via shared memory or through
message passing (packets moved by the OS)
o Error detection – OS needs to be constantly aware of possible errors
May occur in the CPU and memory hardware, in I/O devices, in
user program
For each type of error, OS should take the appropriate action to
ensure correct and consistent computing
Debugging facilities can greatly enhance the user’s and
programmer’s abilities to efficiently use the system
Another set of OS functions exists for ensuring the efficient operation of the
system itself via resource sharing
o Resource allocation - When multiple users or multiple jobs are running
concurrently, resources must be allocated to each of them
Many types of resources - Some (such as CPU cycles, main
memory, and file storage) may have special
allocation code, while others (such as I/O devices) may have general
request and release code.
o Accounting - To keep track of which users use how much and what
kinds of computer resources
o Protection and security - The owners of information stored in a
multiuser or networked computer system may want to control use of
that information, concurrent processes should not interfere with each
other
Protection involves ensuring that all access to system resources
is controlled
Security of the system from outsiders requires user
authentication, extends to defending external I/O devices from
invalid access attempts
If a system is to be protected and secure, precautions must be
instituted throughout it. A chain is only as strong as its weakest
link.
Command Line Interface allows direct command entry
Sometimes implemented in kernel, sometimes by systems
program
Sometimes multiple flavors implemented – shells
Primarily fetches a command from user and executes it
Sometimes commands built-in, sometimes just names of
programs
o If the latter, adding new features doesn’t require
shell modification
User-friendly desktop metaphor interface
o Usually mouse, keyboard, and monitor
o Icons represent files, programs, actions, etc
o Various mouse buttons over objects in the interface cause various
actions (provide information, options, execute function, open
directory (known as a folder)
o Invented at Xerox PARC
Many systems now include both CLI and GUI interfaces
o Microsoft Windows is GUI with CLI “command” shell
o Apple Mac OS X has the "Aqua" GUI interface with a UNIX kernel
underneath and shells available
o Solaris is CLI with optional GUI interfaces (Java Desktop, KDE)
Functions of an Operating System
An operating system has a variety of functions to perform. Some of the prominent
functions of an operating system can be broadly outlined as:
Processor Management:
This deals with management of the Central Processing Unit (CPU). The
operating system takes care of the allotment of CPU time to different processes;
deciding which process gets the CPU next, and for how long, is called
scheduling. There are various types of scheduling techniques that are used
by operating systems:
1. Shortest Job First (SJF): Processes which need the shortest CPU time
are scheduled first.
2. Round Robin Scheduling: Each process is assigned a fixed CPU
execution time slice in a cyclic way.
3. Priority Based Scheduling (Non-Preemptive): In this scheduling,
processes are scheduled according to their priorities, i.e., the highest-priority
process is scheduled first. If the priorities of two processes match, they are
scheduled according to arrival time.
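As an illustration, non-preemptive SJF can be sketched in a few lines of Python (a toy model, not real scheduler code; the job list and burst times are invented for the example):

```python
# Toy sketch of non-preemptive Shortest Job First (SJF):
# jobs that need the least CPU time are scheduled first.

def sjf_order(burst_times):
    """Return the execution order (job indices) and the average waiting time."""
    # Sort job indices by burst time, shortest first.
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    waiting, elapsed = {}, 0
    for i in order:
        waiting[i] = elapsed          # time this job waited before starting
        elapsed += burst_times[i]     # CPU runs the job to completion
    avg_wait = sum(waiting.values()) / len(burst_times)
    return order, avg_wait

order, avg_wait = sjf_order([6, 2, 8, 3])   # burst times in time units
print(order, avg_wait)                       # the short jobs (2 and 3) run first
```

Running the short jobs first minimizes the average waiting time for this batch.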
Device Management:
The Operating System communicates with hardware and the
attached devices and maintains a balance between them and the CPU. This is
all the more important because the CPU processing speed is much higher than
that of I/O devices. In order to optimize the CPU time, the operating system
employs two techniques – Buffering and Spooling.
i. Buffering:
In this technique, input and output data are temporarily stored in an input
buffer and an output buffer. Once the signal for input or output is sent to or
from the CPU respectively, the operating system, through the device controller,
moves the data from the input device to the input buffer, and from the output
buffer to the output device. In the case of input, if the buffer is full, the operating
system sends a signal to the program, which processes the data stored in the
buffer. When the buffer becomes empty, the program informs the operating
system, which reloads the buffer, and the input operation continues.
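A toy Python sketch of this buffering cycle (the buffer size and the `device_delivers` and `process` names are invented for illustration, not an OS API):

```python
from collections import deque

# Toy model of an input buffer: the device controller fills the buffer,
# and the program is signalled to drain it once it is full.
BUFFER_SIZE = 4
buffer = deque(maxlen=BUFFER_SIZE)

def device_delivers(byte, process):
    """Controller moves one unit of input data into the buffer."""
    buffer.append(byte)
    if len(buffer) == BUFFER_SIZE:      # buffer full: signal the program
        process(list(buffer))
        buffer.clear()                  # program empties it; input continues

consumed = []
for b in b"ABCDEFGH":
    device_delivers(b, consumed.append)
print(consumed)  # two full batches of four bytes each
```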
ii. Spooling (Simultaneous Peripheral Operation On-Line)
This is a device management technique used for processing
different tasks on the same input/output device. When various
users on a network share the same resource, it is possible that
more than one user might give it a command at the same point of time. So, the
operating system temporarily stores the data of every user on the hard disk of
the computer to which the resource is attached. The individual user need not
wait for the execution process to be completed. Instead, the operating system
sends the data from the hard disk to the resource one by one.
Example: printer
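The spooling idea can be sketched as a simple queue (a hypothetical model; `submit` and `printer_run` are invented names, not a real OS interface):

```python
from collections import deque

# Toy print-spooling sketch: each user's output is first saved to disk
# (here, a queue standing in for spool files), then the OS feeds the
# jobs to the single printer one at a time.
spool = deque()

def submit(user, document):
    spool.append((user, document))      # user returns immediately; no waiting

def printer_run():
    printed = []
    while spool:                        # OS sends spooled jobs one by one
        user, doc = spool.popleft()
        printed.append(f"{user}:{doc}")
    return printed

submit("alice", "report.txt")
submit("bob", "notes.txt")
print(printer_run())                    # jobs printed in submission order
```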
Memory management
In a computer, both the CPU and the I/O devices interact with the
memory. When a program needs to be executed it is loaded onto the main
memory till the execution is completed. Thereafter that memory space is freed
and is available for other programs. The common memory management
techniques used by the operating system are Partitioning and Virtual Memory.
i. Partitioning:
The total memory is divided into various partitions of the same size
or different sizes. This helps to accommodate a number of programs in
memory. Partitions can be fixed, i.e. they remain the same for all programs in
memory, or variable, i.e. memory is allocated when a program is loaded into
memory. The latter approach causes less wastage of memory, but in due
course of time, it may become fragmented.
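A small sketch of variable partitioning with a first-fit allocator (illustrative only; the hole sizes are invented for the example):

```python
# Illustrative sketch of variable partitioning with first-fit allocation.
# Free memory is a list of (start, size) holes; a program gets the first
# hole large enough, and the leftover becomes a smaller hole, which is
# how fragmentation develops over time.

def first_fit(holes, request):
    """Allocate `request` KB; return (start, new_holes) or (None, holes)."""
    for idx, (start, size) in enumerate(holes):
        if size >= request:
            leftover = [(start + request, size - request)] if size > request else []
            return start, holes[:idx] + leftover + holes[idx + 1:]
    return None, holes                  # no single hole is big enough

holes = [(0, 100), (300, 50)]           # two free regions, sizes in KB
addr, holes = first_fit(holes, 40)
print(addr, holes)                      # allocated at 0; first hole shrinks to 60 KB
```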
ii. Virtual Memory:
This is a technique used by operating systems which allows
the user to load programs which are larger than the main memory of the
computer. In this technique the program is executed even if the complete
program cannot be loaded into the main memory, leading to efficient
memory utilization.
File Management:
The operating System manages the files, folders and directory systems on a
computer. Any data on a computer is stored in the form of files and the
operating system keeps information about all of them using File Allocation
Table (FAT). The FAT stores general information about files like filename,
type (text or binary), size, starting address and access mode
(sequential/indexed sequential/direct/relative). The file manager of the
operating system helps to create, edit, copy, allocate memory to the files and
also updates the FAT. The operating system also takes care that files are
opened with proper access rights to read or edit them.
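A toy model of a FAT-style cluster chain can make this concrete (this mimics the idea of the table, not the actual on-disk FAT layout; the file names and cluster numbers are invented):

```python
# Toy model of a File Allocation Table: the FAT maps each cluster to the
# next cluster of the same file; -1 marks the last cluster of a file.
fat = {2: 5, 5: 9, 9: -1,               # "report.txt" occupies clusters 2 -> 5 -> 9
       3: 4, 4: -1}                      # "notes.txt" occupies clusters 3 -> 4
directory = {"report.txt": 2, "notes.txt": 3}   # file name -> starting cluster

def clusters_of(name):
    """Follow the FAT chain from a file's starting cluster to its end."""
    chain, c = [], directory[name]
    while c != -1:
        chain.append(c)
        c = fat[c]
    return chain

print(clusters_of("report.txt"))        # [2, 5, 9]
```

Reading a file sequentially means walking this chain cluster by cluster, which is why the FAT stores the starting address of each file.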
Evolution of Operating Systems
The evolution of operating systems is directly dependent on the development of
computer systems and how users use them. Here is a quick tour of computing
systems over the past fifty years.
Early Evolution
1945: ENIAC, Moore School of Engineering, University of Pennsylvania.
1949: EDSAC and EDVAC
1949: BINAC - a successor to the ENIAC
1951: UNIVAC by Remington
1952: IBM 701
1956: The interrupt
1954-1957: FORTRAN was developed
Operating Systems - Late 1950s
By the late 1950s, operating systems were much improved and supported the
following features:
It was able to perform Single stream batch processing.
It could use Common, standardized, input/output routines for device access.
Program transition capabilities to reduce the overhead of starting a new job
was added.
Error recovery to clean up after a job terminated abnormally was added.
Job control languages that allowed users to specify the job definition and
resource requirements were made possible.
Operating Systems - In 1960s
1961: The dawn of minicomputers
1962: Compatible Time-Sharing System (CTSS) from MIT
1963: Burroughs Master Control Program (MCP) for the B5000 system
1964: IBM System/360
1960s: Disks became mainstream
1966: Minicomputers got cheaper, more powerful, and really useful.
1967-1968: Mouse was invented.
1964 and onward: Multics
1969: The UNIX Time-Sharing System from Bell Telephone Laboratories.
Supported OS Features by 1970s
Multi User and Multi tasking was introduced.
Dynamic address translation hardware and Virtual machines came into
picture.
Modular architectures came into existence.
Personal, interactive systems came into existence.
Accomplishments after 1970
1971: Intel announces the microprocessor
1972: IBM comes out with VM: the Virtual Machine Operating System
1973: UNIX 4th Edition is published
1973: Ethernet
1974 The Personal Computer Age begins
1974: Gates and Allen wrote BASIC for the Altair
1976: Apple II
August 12, 1981: IBM introduces the IBM PC
1983 Microsoft begins work on MS-Windows
1984 Apple Macintosh comes out
1990 Microsoft Windows 3.0 comes out
1991 GNU/Linux
1992 The first Windows virus comes out
1993 Windows NT
2007: iOS
2008: Android OS
And as the research and development work continues, we are seeing new operating
systems being developed and existing ones getting improved and modified to
enhance the overall user experience, making operating systems fast and efficient
like never before.
Also, with the advent of new devices like wearables, including smart
watches, smart glasses, VR gear, etc., the demand for unconventional operating
systems is also rising.
Types of Operating Systems
Following are some of the most widely used types of Operating system.
1. Simple Batch System
2. Multiprogramming Batch System
3. Multiprocessor System
4. Desktop System
5. Distributed Operating System
6. Clustered System
7. Real-Time Operating System
8. Handheld System
Simple Batch Systems
In this type of system, there is no direct interaction between user and the
computer.
The user has to submit a job (written on cards or tape) to a computer
operator.
Then computer operator places a batch of several jobs on an input device.
Jobs are batched together by type of languages and requirement.
Then a special program, the monitor, manages the execution of each
program in the batch.
The monitor is always in the main memory and available for execution.
Disadvantages of Simple Batch Systems
1. No interaction between user and computer.
2. No mechanism to prioritise the processes.
Multiprogramming Batch Systems
In this the operating system picks up and begins to execute one of the jobs
from memory.
Once this job needs an I/O operation, the operating system switches to another
job (keeping the CPU and OS always busy).
Jobs in memory are always fewer than the number of jobs on disk (the Job
Pool).
If several jobs are ready to run at the same time, then the system chooses
which one to run through the process of CPU Scheduling.
In a non-multiprogrammed system, there are moments when the CPU sits
idle and does not do any work.
In a multiprogramming system, the CPU will never be idle and keeps on
processing.
Time Sharing Systems are very similar to Multiprogramming batch systems. In
fact time sharing systems are an extension of multiprogramming systems.
In Time sharing systems the prime focus is on minimizing the response time,
while in multiprogramming the prime focus is to maximize the CPU usage.
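Round-robin time slicing, the classic time-sharing policy, can be sketched as follows (a toy simulation with invented burst times and quantum):

```python
from collections import deque

# Sketch of round-robin time sharing: each process gets a fixed quantum
# of CPU time in cyclic order, which keeps response time low for every user.
def round_robin(bursts, quantum):
    """Return the sequence of (pid, time_run) slices until all jobs finish."""
    ready = deque(enumerate(bursts))    # (pid, remaining time)
    trace = []
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)
        trace.append((pid, run))
        if remaining > run:             # unfinished: back of the ready queue
            ready.append((pid, remaining - run))
    return trace

print(round_robin([5, 2, 3], quantum=2))  # processes alternate in 2-unit slices
```

Even the longest job never monopolizes the CPU for more than one quantum at a time, which is exactly the response-time property time-sharing systems optimize for.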
Multiprocessor Systems
A multiprocessor system consists of several processors that share a common
physical memory. A multiprocessor system provides higher computing power and
speed. In a multiprocessor system all processors operate under a single operating
system. The multiplicity of the processors, and how they act together, is
transparent to the users.
Advantages of Multiprocessor Systems
1. Enhanced performance
2. Execution of several tasks by different processors concurrently increases the
system's throughput without speeding up the execution of a single task.
3. If possible, the system divides a task into many subtasks which can then
be executed in parallel on different processors, thereby speeding up the
execution of single tasks.
Desktop Systems
Earlier, CPUs and PCs lacked the features needed to protect an operating system
from user programs. PC operating systems therefore were
neither multiuser nor multitasking. However, the goals of these operating
systems have changed with time; instead of maximizing CPU and peripheral
utilization, the systems opt for maximizing user convenience and responsiveness.
These systems are called Desktop Systems and include PCs running Microsoft
Windows and the Apple Macintosh. Operating systems for these computers have
benefited in several ways from the development of operating systems
for mainframes.
Microcomputers were immediately able to adopt some of the technology
developed for larger operating systems. On the other hand, the hardware costs for
microcomputers are sufficiently low that individuals have sole use of the computer,
and CPU utilization is no longer a prime concern. Thus, some of the design
decisions made in operating systems for mainframes may not be appropriate for
smaller systems.
Distributed Operating System
The motivation behind developing distributed operating systems is the availability
of powerful and inexpensive microprocessors and advances in communication
technology.
These advancements in technology have made it possible to design and develop
distributed systems comprising many computers that are interconnected by
communication networks. The main benefit of distributed systems is their low
price/performance ratio.
Advantages of Distributed Operating Systems
1. As there are multiple systems involved, a user at one site can utilize the
resources of systems at other sites for resource-intensive tasks.
2. Fast processing.
3. Less load on the Host Machine.
Types of Distributed Operating Systems
Following are the two types of distributed operating systems used:
1. Client-Server Systems
2. Peer-to-Peer Systems
Client-Server Systems
Centralized systems today act as server systems to satisfy requests generated
by client systems. The general structure of a client-server system is depicted in the
figure below:
Server Systems can be broadly categorized as: Compute Servers and File
Servers.
Compute Server systems, provide an interface to which clients can send
requests to perform an action, in response to which they execute the action and
send back results to the client.
File Server systems, provide a file-system interface where clients can
create, update, read, and delete files.
Peer-to-Peer Systems
The growth of computer networks - especially the Internet and World Wide Web
(WWW) – has had a profound influence on the recent development of operating
systems. When PCs were introduced in the 1970s, they were designed
for personal use and were generally considered standalone computers. With the
beginning of widespread public use of the Internet in the 1990s for electronic mail
and FTP, many PCs became connected to computer networks.
In contrast to the Tightly Coupled systems, the computer networks used in these
applications consist of a collection of processors that do not share memory or a
clock. Instead, each processor has its own local memory. The processors
communicate with one another through various communication lines, such as high-
speed buses or telephone lines. These systems are usually referred to as loosely
coupled systems (or distributed systems). The general structure of a peer-to-peer
system is depicted in the figure below:
Clustered Systems
Like parallel systems, clustered systems gather together multiple CPUs to
accomplish computational work.
Clustered systems differ from parallel systems, however, in that they are
composed of two or more individual systems coupled together.
The definition of the term clustered is not concrete; the generally accepted
definition is that clustered computers share storage and are closely linked via
LAN networking.
Clustering is usually performed to provide high availability.
A layer of cluster software runs on the cluster nodes. Each node can monitor
one or more of the others. If the monitored machine fails, the monitoring
machine can take ownership of its storage, and restart the application(s) that
were running on the failed machine. The failed machine can remain down, but
the users and clients of the application would only see a brief interruption of
service.
Asymmetric Clustering - In this, one machine is in hot standby mode while
the other is running the applications. The hot standby host (machine) does
nothing but monitor the active server. If that server fails, the hot standby host
becomes the active server.
Symmetric Clustering - In this, two or more hosts are running applications,
and they are monitoring each other. This mode is obviously more efficient, as it
uses all of the available hardware.
Parallel Clustering - Parallel clusters allow multiple hosts to access the
same data on the shared storage. Because most operating systems lack support
for this simultaneous data access by multiple hosts, parallel clusters are usually
accomplished by special versions of software and special releases of
applications.
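The asymmetric (hot-standby) failover described above can be sketched as a toy model (hypothetical classes and names; real cluster software monitors heartbeats over the network and restarts applications on the surviving node):

```python
# Simplified asymmetric-clustering sketch: a hot-standby node monitors the
# active node and takes over its workload when the active node fails.
class Node:
    def __init__(self, name, active):
        self.name, self.active, self.apps = name, active, []

def monitor(standby, active_node):
    """Standby checks the active node; on failure it becomes active."""
    if not active_node.active:                  # missed heartbeat: node failed
        standby.apps.extend(active_node.apps)   # take ownership, restart apps
        active_node.apps = []
        standby.active = True
    return standby.active

primary = Node("primary", active=True)
primary.apps = ["db", "web"]
standby = Node("standby", active=False)

monitor(standby, primary)      # primary healthy: standby stays idle
primary.active = False         # primary fails
monitor(standby, primary)      # standby takes over the applications
print(standby.active, standby.apps)   # True ['db', 'web']
```

Clients see only a brief interruption: the applications continue on the standby node, matching the failover behaviour described above.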
Clustered technology is rapidly changing. Clustered systems' usage and features
should expand greatly as Storage Area Networks (SANs) become widespread.
SANs allow easy attachment of multiple hosts to multiple storage units. Current
clusters are usually limited to two or four hosts due to the complexity of
connecting the hosts to shared storage.
Real Time Operating System
It is defined as an operating system that guarantees a maximum time for each of
the critical operations that it performs, like OS calls and interrupt handling.
Real-time operating systems which guarantee the maximum time for critical
operations and complete them on time are referred to as Hard Real-Time
Operating Systems.
Real-time operating systems that can only guarantee the maximum time for most
operations, i.e. the critical task will get priority over other tasks, but with no
assurance of completing it within a defined time, are referred to as Soft
Real-Time Operating Systems.
Handheld Systems
Handheld systems include Personal Digital Assistants (PDAs), such as
PalmPilots, or cellular telephones with connectivity to a network such as the Internet.
They are usually of limited size due to which most handheld devices have a small
amount of memory, include slow processors, and feature small display screens.
Many handheld devices have between 512 KB and 8 MB of memory. As a
result, the operating system and applications must manage memory efficiently.
This includes returning all allocated memory back to the memory manager once
the memory is no longer being used.
Currently, many handheld devices do not use virtual memory techniques,
thus forcing program developers to work within the confines of limited physical
memory.
Processors for most handheld devices often run at a fraction of the speed of a
processor in a PC. Faster processors require more power. To include a faster
processor in a handheld device would require a larger battery that would have
to be replaced more frequently.
The last issue confronting program designers for handheld devices is the
small display screens typically available. One approach for displaying the
content in web pages is web clipping, where only a small subset of a web page
is delivered and displayed on the handheld device.
Some handheld devices may use wireless technology such as Bluetooth, allowing
remote access to e-mail and web browsing. Cellular telephones with connectivity
to the Internet fall into this category. Their use continues to expand as network
connections become more available and other options such as cameras and MP3
players, expand their utility.
Operating Systems: It is the interface between the user and the computer hardware.
Types of Operating System (OS):
1. Batch OS
A set of similar jobs are stored in the main memory for execution. A job gets
assigned to the CPU, only when the execution of the previous job completes.
2. Multiprogramming OS
The main memory consists of jobs waiting for CPU time. The OS selects one
of the processes and assigns it to the CPU. Whenever the executing process
needs to wait for any other operation (like I/O), the OS selects another process
from the job queue and assigns it to the CPU. This way, the CPU is never kept
idle and the user gets the flavor of getting multiple tasks done at once.
3. Multitasking OS
Multitasking OS combines the benefits of Multiprogramming OS and CPU
scheduling to perform quick switches between jobs. The switch is so quick
that the user can interact with each program as it runs.
4. Time Sharing OS
Time sharing systems require interaction with the user to instruct the OS to
perform various tasks. The OS responds with an output. The instructions are
usually given through an input device like the keyboard.
5. Real Time OS
Real Time OS are usually built for dedicated systems to accomplish a specific
set of tasks within deadlines.
Memory management is the functionality of an operating system
which handles or manages primary memory and moves processes back and
forth between main memory and disk during execution.
Memory management keeps track of each and every memory
location, regardless of whether it is allocated to some process or free. It
checks how much memory is to be allocated to processes. It decides which
process will get memory at what time. It tracks whenever some memory gets
freed or unallocated and correspondingly updates the status.
Static vs Dynamic Loading
The choice between static and dynamic loading is made at the time the
computer program is developed. If you have to load your program
statically, then at compilation time the complete program will be
compiled and linked without leaving any external program or module
dependency. The linker combines the object program with other necessary
object modules into an absolute program, which also includes logical
addresses.
If you are writing a Dynamically loaded program, then your compiler will
compile the program and for all the modules which you want to include
dynamically, only references will be provided and rest of the work will be
done at the time of execution.
At the time of loading, with static loading, the absolute program (and
data) is loaded into memory in order for execution to start.
If you are using dynamic loading, dynamic routines of the library are
stored on a disk in relocatable form and are loaded into memory only when
they are needed by the program.
Static vs Dynamic Linking
As explained above, when static linking is used, the linker combines all
other modules needed by a program into a single executable program to
avoid any runtime dependency.
When dynamic linking is used, it is not required to link the actual module or
library with the program; rather, a reference to the dynamic module is
provided at the time of compilation and linking. Dynamic Link Libraries
(DLL) in Windows and Shared Objects in Unix are good examples of
dynamic libraries.
Swapping
Swapping is a mechanism in which a process can be swapped temporarily
out of main memory (moved) to secondary storage (disk), making that
memory available to other processes. At some later time, the system swaps
the process back from secondary storage to main memory.
Though performance is usually affected by the swapping process, it helps in
running multiple big processes in parallel, and that is why swapping is also
known as a technique for memory compaction.
The total time taken by swapping process includes the time it takes to move
the entire process to a secondary disk and then to copy the process back to
memory, as well as the time the process takes to regain main memory.
Let us assume that the user process is of size 2048 KB and the standard
hard disk where swapping takes place has a data transfer rate of around 1
MB (1024 KB) per second. The actual transfer of the 2048 KB process to or
from memory will take
2048 KB / 1024 KB per second
= 2 seconds
= 2000 milliseconds
Now, considering both the swap-out and swap-in time, it will take 4000
milliseconds in total, plus other overhead while the process competes to regain
main memory.
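The same arithmetic, written out as a short check (assuming, as above, a 2048 KB process and a 1024 KB per second transfer rate):

```python
# Swap-time arithmetic from the example above.
process_size_kb = 2048
transfer_rate_kb_per_s = 1024

one_way_ms = process_size_kb / transfer_rate_kb_per_s * 1000   # swap out
round_trip_ms = 2 * one_way_ms                                  # out and back in
print(one_way_ms, round_trip_ms)   # 2000.0 ms each way, 4000.0 ms total
```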
Fragmentation
As processes are loaded into and removed from memory, the free memory
space is broken into little pieces. It sometimes happens that processes
cannot be allocated to these memory blocks because of their small size, and
the blocks remain unused. This problem is known as fragmentation.
Fragmentation is of two types:
1. External fragmentation: Total memory space is enough to satisfy a request
or to hold a process, but it is not contiguous, so it cannot be used.
2. Internal fragmentation: The memory block assigned to a process is bigger
than requested, so some portion of the block is left unused, as it cannot be
used by another process.
The following diagram shows how fragmentation can cause waste of
memory, and how a compaction technique can be used to create more free
memory out of fragmented memory:
External fragmentation can be reduced by compaction: shuffling memory
contents to place all free memory together in one large block. To make
compaction feasible, relocation should be dynamic.
Internal fragmentation can be reduced by assigning the smallest partition
that is still large enough for the process.
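Compaction can be sketched as relocating every resident block toward address 0 so the free space coalesces into one hole (a toy model with invented block names and sizes; real compaction requires dynamic relocation support):

```python
# Sketch of compaction: shuffle allocated blocks toward address 0 so all
# free memory coalesces into one large hole at the end.
def compact(blocks, total_memory):
    """blocks: (name, old_start, size) tuples; relocate them contiguously."""
    layout, addr = [], 0
    for name, _old_start, size in sorted(blocks, key=lambda b: b[1]):
        layout.append((name, addr, size))   # move block to the next free address
        addr += size                         # (this is why relocation must be dynamic)
    return layout, total_memory - addr       # one contiguous free hole remains

layout, hole = compact([("P1", 0, 30), ("P2", 60, 20)], total_memory=100)
print(layout, hole)   # P2 relocated to address 30; one 50 KB hole remains
```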
Paging
A computer can address more memory than the amount physically installed
on the system. This extra memory is called virtual memory, and it is
a section of a hard disk that is set up to emulate the computer's RAM. The
paging technique plays an important role in implementing virtual memory.
Paging is a memory management technique in which process address space
is broken into blocks of the same size called pages (size is power of 2,
between 512 bytes and 8192 bytes). The size of the process is measured in
the number of pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical)
memory called frames and the size of a frame is kept the same as that of a
page to have optimum utilization of the main memory and to avoid external
fragmentation.
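As a rough sketch of how a paged address is translated, assume a hypothetical 4096-byte page size and a made-up page table; a real MMU performs this lookup in hardware:

```python
PAGE_SIZE = 4096  # assumed page size (a power of 2)

def translate(logical_addr, page_table):
    """Split a logical address into (page, offset) and map the page to a frame."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]            # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}               # hypothetical page -> frame mapping
translate(4100, page_table)             # page 1, offset 4 -> frame 2 -> 8196
```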
Segmentation
Segmentation is a memory management technique in which each job is
divided into several segments of different sizes, one for each module that
contains pieces that perform related functions. Each segment is actually a
different logical address space of the program.
When a process is to be executed, its corresponding segments are loaded
into non-contiguous areas of memory, though every individual segment is
loaded into a contiguous block of available memory.
Segmentation memory management works very similarly to paging, but here
segments are of variable length, whereas in paging the pages are of fixed
size.
A program segment contains the program's main function, utility functions,
data structures, and so on. The operating system maintains a segment
map table for every process and a list of free memory blocks along with
segment numbers, their size and corresponding memory locations in main
memory. For each segment, the table stores the starting address of the
segment and the length of the segment. A reference to a memory location
includes a value that identifies a segment and an offset.
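A minimal sketch of segment-table translation follows; the segment table here (base, length pairs) is made up for illustration and does not come from the text:

```python
# hypothetical segment table: segment number -> (base address, length)
SEGMENT_TABLE = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    """Map a (segment, offset) reference to a physical address, checking bounds."""
    base, length = SEGMENT_TABLE[segment]
    if offset >= length:                # reference past the segment's end: trap
        raise MemoryError("segment offset out of range")
    return base + offset

translate(2, 53)                        # segment 2 starts at 4300 -> 4353
```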
Requirements of memory management system
Memory management keeps track of the status of each memory
location, whether it is allocated or free. It allocates the memory dynamically to the
programs at their request and frees it for reuse when it is no longer needed. Memory
management is meant to satisfy some requirements that we should keep in mind.
These requirements of memory management are:
1. Relocation – In a multiprogramming system, the available memory is generally
shared among a number of processes, so it is not possible to know in
advance which other programs will be resident in main memory when a given
program executes. Swapping active processes in and out of main memory
enables the operating system to have a larger pool of ready-to-execute
processes.
When a program is swapped out to disk, it is not always possible for it to
occupy its previous location when it is swapped back into main memory, since
that location may now be occupied by another process. We may need to
relocate the process to a different area of memory. Thus there is a possibility
that a program may be moved within main memory due to swapping.
The figure depicts a process image. The process image is occupying a
continuous region of main memory. The operating system will need to know
many things including the location of process control information, the
execution stack and the code entry. Within a program, there are memory
references in various instructions and these are called logical addresses.
After loading of the program into main memory, the processor
and the operating system must be able to translate logical addresses into
physical addresses. Branch instructions contain the address of the next
instruction to be executed. Data reference instructions contain the address of
byte or word of data referenced.
2. Protection – There is always a danger when we have multiple programs running
at the same time, as one program may write into the address space of another.
So every process must be protected against unwanted interference, whether
accidental or deliberate, when another process tries to write into its memory.
A trade-off occurs between the relocation and protection requirements, as
satisfying the relocation requirement increases the difficulty of satisfying the
protection requirement.
Since the location of a program in main memory cannot be predicted, it is
impossible to check absolute addresses at compile time to assure protection.
Moreover, most programming languages allow dynamic calculation of addresses
at run time. The memory protection requirement must therefore be satisfied
by the processor rather than the operating system, because the operating
system can hardly control a process while it occupies the processor. Thus it is
possible to check the validity of memory references.
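One common hardware mechanism that satisfies both the relocation and the protection requirements is a base/limit register pair; a minimal sketch (the register values below are illustrative):

```python
def access(logical_addr, base, limit):
    """Relocate a logical address and trap any reference outside the process."""
    if not 0 <= logical_addr < limit:   # every reference is checked in hardware
        raise MemoryError("protection trap: address outside process bounds")
    return base + logical_addr          # physical = base + logical

access(100, base=3000, limit=1200)      # -> physical address 3100
```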
3. Sharing – A protection mechanism must allow several processes to
access the same portion of main memory. Allowing each process access to
the same copy of a program, rather than having its own separate copy, has an
advantage.
For example, multiple processes may use the same system file, and it is natural
to load one copy of the file in main memory and let it be shared by those
processes. It is the task of memory management to allow controlled access to
the shared areas of memory without compromising protection. The mechanisms
used to support relocation also support sharing capabilities.
4. Logical organization – Main memory is organized as a linear, one-dimensional
address space consisting of a sequence of bytes or words. Most programs,
however, are organized into modules, some of which are unmodifiable
(read-only, execute-only) and some of which contain data that can be
modified. To deal effectively with user programs, the operating system and
computer hardware must support a basic module structure to provide the
required protection and sharing. It has the following advantages:
Modules can be written and compiled independently, with all references
from one module to another resolved by the system at run time.
Different modules can be provided with different degrees of protection.
There are mechanisms by which modules can be shared among
processes. Sharing can be provided at the module level, letting the user
specify the sharing that is desired.
5. Physical organization – Computer memory is structured in two levels,
referred to as main memory and secondary memory. Main memory is
relatively fast and costly compared to secondary memory, and it is volatile.
Secondary memory is therefore provided for long-term storage of data, while
main memory holds the programs currently in use. The major system concern
is the flow of information between main memory and secondary memory, and
it is impractical to leave this responsibility to the programmer, for two reasons:
When the main memory available for a program and its data is insufficient,
the programmer must engage in a practice known as overlaying, which
allows different modules to be assigned the same region of memory. Its
disadvantage is that it is time-consuming for the programmer.
In a multiprogramming environment, the programmer does not know at
coding time how much space will be available or where that space will be
located in memory.
Fixed (or static) Partitioning in Operating System
In operating systems, Memory Management is the function responsible for
allocating and managing computer’s main memory. Memory Management function
keeps track of the status of each memory location, either allocated or free to ensure
effective and efficient use of Primary Memory.
There are two Memory Management Techniques: Contiguous, and Non-
Contiguous. In Contiguous Technique, executing process must be loaded entirely
in main-memory. Contiguous Technique can be divided into:
1. Fixed (or static) partitioning
2. Variable (or dynamic) partitioning
Fixed Partitioning:
This is the oldest and simplest technique used to put more than one process in the
main memory. In this partitioning, the number of (non-overlapping) partitions in RAM
is fixed, but the sizes of the partitions may or may not be the same. As this
is contiguous allocation, no spanning is allowed. Partitions are made
before execution, at system configuration time.
As illustrated in the above figure, the first process consumes only 1MB of the 4MB
partition in main memory.
Hence, internal fragmentation in the first block is (4-1) = 3MB.
Sum of internal fragmentation in every block = (4-1)+(8-7)+(8-7)+(16-14) =
3+1+1+2 = 7MB.
Suppose a process P5 of size 7MB arrives. This process cannot be accommodated
in spite of the available free space, because the free space is not contiguous
(spanning is not allowed). Hence, the 7MB becomes part of external fragmentation.
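The figures above can be reproduced in a few lines (partition and process sizes taken from the example):

```python
partitions = [4, 8, 8, 16]   # fixed partition sizes in MB
processes  = [1, 7, 7, 14]   # size of the process loaded into each partition

# internal fragmentation = unused space inside each occupied partition
internal = sum(part - proc for part, proc in zip(partitions, processes))
# (4-1) + (8-7) + (8-7) + (16-14) = 7 MB
```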
There are some advantages and disadvantages of fixed partitioning.
Advantages of Fixed Partitioning –
1. Easy to implement:
The algorithms needed to implement Fixed Partitioning are easy to implement.
Placing a process simply requires putting it into a suitable partition, without
worrying about the emergence of internal and external fragmentation.
2. Little OS overhead: Fixed Partitioning requires little extra or indirect
computational power.
Disadvantages of Fixed Partitioning –
1. Internal Fragmentation:
Main memory use is inefficient. Any program, no matter how small, occupies
an entire partition. This causes internal fragmentation.
2. External Fragmentation:
The total unused space (as stated above) of the various partitions cannot be
used to load a process, even though the space is available, because it is not
contiguous (spanning is not allowed).
3. Limited process size:
A process of size greater than the largest partition in main memory cannot be
accommodated. The partition size cannot be varied according to the size of the
incoming process. Hence, a process of size 32MB in the above-stated
example is invalid.
4. Limitation on degree of multiprogramming: Partitions in main memory
are made before execution, at system configuration time, so main memory is
divided into a fixed number of partitions. If there are n partitions in
RAM and m processes, then the condition m ≤ n
must be fulfilled. A number of processes greater than the number of partitions in
RAM is invalid in Fixed Partitioning.
Program for First Fit algorithm in Memory Management
Prerequisite : Partition Allocation Methods
In first fit, the first partition from the top of main memory that is sufficient
is allocated.
Example :
Input : blockSize[] = {100, 500, 200, 300, 600};
processSize[] = {212, 417, 112, 426};
Output:
Process No.    Process Size    Block No.
1              212             2
2              417             5
3              112             2
4              426             Not Allocated
Its advantage is that it is the fastest search, as it scans only as far as the
first block large enough to hold the process.
Its drawback is that it may leave processes unallocated even when allocation
was possible. In the above example, process 4 (of size 426) does not get
memory. However, it was possible to allocate memory for all processes using
best fit allocation: block 4 (of size 300) to process 1, block 2 to process 2,
block 3 to process 3 and block 5 to process 4.
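Since the program itself is not reproduced above, here is one possible sketch of first fit allocation in Python (block numbers are 1-based, as in the table; `None` marks an unallocated process):

```python
def first_fit(block_size, process_size):
    """Return, for each process, the 1-based block it lands in (or None)."""
    free = list(block_size)                 # remaining space in each block
    allocation = []
    for size in process_size:
        placed = None
        for i, space in enumerate(free):
            if space >= size:               # first block that fits wins
                free[i] -= size
                placed = i + 1
                break
        allocation.append(placed)
    return allocation

first_fit([100, 500, 200, 300, 600], [212, 417, 112, 426])
# -> [2, 5, 2, None], matching the table above
```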
Program for Best Fit algorithm in Memory Management
Prerequisite : Partition allocation methods
Best fit allocates the process to a partition which is the smallest sufficient partition
among the free available partitions.
Example:
Input : blockSize[] = {100, 500, 200, 300, 600};
processSize[] = {212, 417, 112, 426};
Output:
Process No.    Process Size    Block No.
1              212             4
2              417             2
3              112             3
4              426             5
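A corresponding sketch of best fit allocation (again, one possible implementation, not the original program):

```python
def best_fit(block_size, process_size):
    """Place each process in the smallest free block that still fits it."""
    free = list(block_size)                 # remaining space in each block
    allocation = []
    for size in process_size:
        best = None
        for i, space in enumerate(free):
            if space >= size and (best is None or space < free[best]):
                best = i                    # smaller sufficient block found
        if best is not None:
            free[best] -= size
        allocation.append(None if best is None else best + 1)
    return allocation

best_fit([100, 500, 200, 300, 600], [212, 417, 112, 426])
# -> [4, 2, 3, 5], matching the table above
```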
Program for Worst Fit algorithm in Memory Management
Prerequisite : Partition allocation methods
Worst Fit allocates a process to the largest sufficient partition among the free
partitions in main memory. If a large process arrives at a later stage, memory
will not have space to accommodate it.
Example:
Input : blockSize[] = {100, 500, 200, 300, 600};
processSize[] = {212, 417, 112, 426};
Output:
Process No.    Process Size    Block No.
1              212             5
2              417             2
3              112             5
4              426             Not Allocated
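And a matching sketch of worst fit allocation (one possible implementation under the same conventions as above):

```python
def worst_fit(block_size, process_size):
    """Place each process in the largest free block that fits it."""
    free = list(block_size)                 # remaining space in each block
    allocation = []
    for size in process_size:
        worst = None
        for i, space in enumerate(free):
            if space >= size and (worst is None or space > free[worst]):
                worst = i                   # larger sufficient block found
        if worst is not None:
            free[worst] -= size
        allocation.append(None if worst is None else worst + 1)
    return allocation

worst_fit([100, 500, 200, 300, 600], [212, 417, 112, 426])
# -> [5, 2, 5, None], matching the table above
```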
Virtual Memory
A computer can address more memory than the amount physically installed
on the system. This extra memory is actually called virtual memory and it
is a section of a hard disk that's set up to emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger
than physical memory. Virtual memory serves two purposes. First, it allows
us to extend the use of physical memory by using disk. Second, it allows us
to have memory protection, because each virtual address is translated to a
physical address.
Following are situations in which the entire program does not need to be
fully loaded in main memory:
User written error handling routines are used only when an error occurred in the
data or computation.
Certain options and features of a program may be used rarely.
Many tables are assigned a fixed amount of address space even though only a
small amount of the table is actually used.
The ability to execute a program that is only partially in memory would confer
many benefits:
Fewer I/O operations would be needed to load or swap each user program into
memory.
A program would no longer be constrained by the amount of physical memory
that is available.
Each user program could take less physical memory, so more programs could
run at the same time, with a corresponding increase in CPU utilization and
throughput.
In modern microprocessors intended for general-purpose use, a memory
management unit, or MMU, is built into the hardware. The MMU's job is to
translate virtual addresses into physical addresses. A basic example is given
below −
Virtual memory is commonly implemented by demand paging. It can also
be implemented in a segmentation system. Demand segmentation can also
be used to provide virtual memory.
Demand Paging
A demand paging system is quite similar to a paging system with swapping,
where processes reside in secondary memory and pages are loaded only on
demand, not in advance. When a context switch occurs, the operating
system does not copy any of the old program's pages out to disk or any
of the new program's pages into main memory. Instead, it just begins
executing the new program after loading its first page, fetching that
program's pages as they are referenced.
While executing a program, if the program references a page which is not
available in main memory because it was swapped out a short time earlier, the
processor treats this invalid memory reference as a page fault and
transfers control from the program to the operating system, which demands the
page back into memory.
Advantages
Following are the advantages of Demand Paging −
Large virtual memory.
More efficient use of memory.
There is no limit on degree of multiprogramming.
Disadvantages
Number of tables and the amount of processor overhead for handling page
interrupts are greater than in the case of the simple paged management
techniques.
Page Replacement Algorithm
Page replacement algorithms are the techniques by which an operating
system decides which memory pages to swap out (writing them to disk) when a
page of memory needs to be allocated. Page replacement happens whenever a
page fault occurs and no free page can be used for the allocation, either
because no pages are available or because the number of free pages is lower
than required.
When the page that was selected for replacement and paged out is
referenced again, it has to be read in from disk, which requires waiting for I/O
completion. This determines the quality of a page replacement
algorithm: the less time spent waiting for page-ins, the better the
algorithm.
A page replacement algorithm looks at the limited information about
accessing the pages provided by hardware, and tries to select which pages
should be replaced to minimize the total number of page misses, while
balancing it with the costs of primary storage and processor time of the
algorithm itself. There are many different page replacement algorithms. We
evaluate an algorithm by running it on a particular string of memory
references and computing the number of page faults.
Reference String
The string of memory references is called the reference string. Reference
strings are generated artificially or by tracing a given system and recording
the address of each memory reference. The latter choice produces a large
amount of data, in which we note two things.
For a given page size, we need to consider only the page number, not the entire
address.
If we have a reference to a page p, then any immediately following references
to page p will never cause a page fault. Page p will be in memory after the first
reference; the immediately following references will not fault.
For example, consider the following sequence of addresses −
123, 215, 600, 1234, 76, 96
If the page size is 100, the page numbers are 1, 2, 6, 12, 0, 0; dropping the
immediately repeated reference to page 0 gives the reference string 1, 2, 6, 12, 0.
First In First Out (FIFO) algorithm
The oldest page in main memory is the one selected for replacement.
It is easy to implement: keep a list, replace pages from the tail and add new
pages at the head.
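The FIFO policy can be sketched as a fault-counting simulation (Python; the reference string and frame count below are illustrative, not from the text):

```python
from collections import deque

def fifo_faults(reference_string, frames):
    """Count page faults under FIFO replacement with the given number of frames."""
    memory, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in memory:              # page fault
            faults += 1
            if len(memory) == frames:       # evict the oldest resident page
                memory.discard(queue.popleft())
            memory.add(page)
            queue.append(page)
    return faults

fifo_faults([7, 0, 1, 2, 0, 3, 0, 4], 3)    # -> 7 page faults with 3 frames
```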
Optimal Page algorithm
An optimal page-replacement algorithm has the lowest page-fault rate of all
algorithms. Such an algorithm exists and has been called OPT or MIN.
It replaces the page that will not be used for the longest period of time, using
knowledge of when each page will next be used.
Least Recently Used (LRU) algorithm
The page which has not been used for the longest time in main memory is the
one selected for replacement.
It is easy to implement: keep a list and replace pages by looking back in time.
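LRU can be sketched the same way as FIFO, tracking recency instead of arrival order (the reference string is the same illustrative one used above; note LRU needs one fewer fault than FIFO on it):

```python
def lru_faults(reference_string, frames):
    """Count page faults under LRU: evict the least recently used page."""
    recency = []                            # front = least recently used
    faults = 0
    for page in reference_string:
        if page in recency:
            recency.remove(page)            # hit: refresh this page's recency
        else:
            faults += 1                     # page fault
            if len(recency) == frames:
                recency.pop(0)              # evict the least recently used page
        recency.append(page)                # most recently used goes to the back
    return faults

lru_faults([7, 0, 1, 2, 0, 3, 0, 4], 3)     # -> 6 page faults with 3 frames
```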