E-528-529, Sector-7, Dwarka, New Delhi-110075
(Nr. Ramphal Chowk and Sector 9 Metro Station)
Ph. 011-47350606, (M) 7838010301-04
www.eduproz.in
Educate Anytime...Anywhere...

"Greetings For The Day"

About Eduproz

We, at EduProz, started our voyage with a dream of making higher education available for everyone. Since its inception, EduProz has been working as a stepping-stone for students coming from varied backgrounds. The best part is that the classroom programmes for distance learning and correspondence courses in both the management (MBA and BBA) and Information Technology (MCA and BCA) streams are free of cost.

Experienced faculty members, state-of-the-art infrastructure and a congenial environment for learning are a few of the things that we offer our students. Our panel of industrial experts, drawn from various industrial domains, leads students not only to secure good marks in examinations, but also to gain an edge over others in their professional lives. Our study materials are sufficient to keep students abreast of the present nuances of the industry. In addition, we give importance to regular tests and sessions to evaluate our students’ progress.

 Students can attend regular classes of distance learning MBA, BBA, MCA and BCA courses at EduProz
without paying anything extra. Our centrally air-conditioned classrooms, well-maintained library and well-
equipped laboratory facilities provide a comfortable environment for learning.




Honing specific skills is essential to success in an interview. Keeping this in mind, EduProz has a career counselling and career development cell where we help students prepare for interviews. Our dedicated placement cell has been helping students land their dream jobs on completion of the course.




EduProz is strategically located in Dwarka, West Delhi (walking distance from Dwarka Sector 9 Metro Station and a four-minute drive from the national highway); students can easily come to our centre from anywhere in Delhi and neighbouring Gurgaon, Haryana, and avail of a quality-oriented education facility at no extra cost.




Why Choose EduProz for Distance Learning?


    •    EduProz provides classroom facilities free of cost.
    •    At EduProz, classroom teaching is conducted by experienced faculty.
    •    Classrooms are spacious and fully air-conditioned, ensuring a comfortable ambience.
    •    The course fee is not unduly expensive.
    •    Placement assistance and student counselling facilities are available.
    •    Unlike several other distance-learning providers, EduProz strives to help and motivate pupils to get high grades, thus ensuring that they are well placed in life.
    •    Students are groomed and prepared to face interview boards.
    •    Mock tests, unit tests and examinations are held to evaluate progress.
    •    Special care is taken in the area of personality development.




                                                    "HAVE A GOOD DAY"




                            Karnataka State Open University

Karnataka State Open University (KSOU) was established on 1st June 1996, with the assent of H.E. the Governor of Karnataka, as a full-fledged University in the academic year 1996, vide Government Notification No/EDI/UOV/dated 12th February 1996 (Karnataka State Open University Act – 1992). The Act was promulgated with the object of incorporating an Open University at the State level, for the introduction and promotion of Open University and Distance Education systems in the education pattern of the State and the country, and for the co-ordination and determination of the standards of such systems. Keeping in view the educational needs of our country in general, and the State in particular, its policies and programmes have been geared to cater to the needy.

Karnataka State Open University is recognised by the UGC and the Distance Education Council (DEC), New Delhi; it is a regular member of the Association of Indian Universities (AIU), Delhi, a permanent member of the Association of Commonwealth Universities (ACU), London, UK, and of the Asian Association of Open Universities (AAOU), Beijing, China, and is also associated with the Commonwealth of Learning (COL).

Karnataka State Open University is situated at the north-western end of the Manasagangotri campus, Mysore. The campus, which is about 5 km from the city centre, has a serene atmosphere ideally suited to academic pursuits. The University at present houses the Administrative Office, the Academic Block, lecture halls, a well-equipped library, guest-house cottages, a modest canteen, a girls’ hostel, and a few cottages providing limited accommodation to students coming to Mysore to attend the Contact Programmes or term-end examinations.
Unit 1: Overview of the Operating Systems

This unit covers the introduction to and evolution of operating systems, and also covers the components of an OS and the services it provides.



Introduction to Operating Systems

Programs, Code files, Processes and Threads

   •   A sequence of instructions telling the computer what to do is called a program.
       The user normally uses a text editor to write their program in a high level
       language, such as Pascal, C, Java, etc. Alternatively, they may write it in
       assembly language. Assembly language is a computer language whose statements
       have an almost one to one correspondence to the instructions understood by the
       CPU of the computer. It provides a way of specifying in precise detail what
       machine code the assembler should create.

       A compiler is used to translate a high level language program into assembly
       language or machine code, and an assembler is used to translate an assembly
       language program into machine code. A linker is used to combine relocatable
       object files (code files corresponding to incomplete portions of a program) into
       executable code files (complete code files, for which the addresses have been
       resolved for all global functions and variables).

       The text for a program written in a high level language or assembly language is
       normally saved in a source file on disk. Machine code for a program is normally
       saved in a code file on disk. The machine code is loaded into the virtual memory
       for a process, when the process attempts to execute the program.

       The notion of a program is becoming more complex nowadays, because of
       shared libraries. In the old days, the user code for a process was all in one file.
       However, with GUI libraries becoming so large, this is no longer possible.
       Library code is now stored in memory that is shared by all processes that use it.
       Perhaps it is best to use the term program for the machine code stored in or
       derived from a single code file.

       Code files contain more than just machine code. On UNIX, a code file starts with
       a header, containing information on the position and size of the code (“text”),
       initialised data, and uninitialised data segments of the code file. The header also
       contains other information, such as the initial value to give the program counter
       (the “entry point”) and global pointer register. The data for the code and
       initialised data segments then follows.
       As well as the above information, code files can contain a symbol table – a table
       indicating the names of all functions and global variables, and the virtual
       addresses they correspond to. The symbol table is used by the linker, when it
       combines several relocatable object files into a single executable code file, to
       resolve references to functions in shared libraries. The symbol table is also used
       for debugging. The structure of UNIX code files on the Alpha is very complex,
       due to the use of shared libraries.
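A loose analogy of the symbol table can be seen in Java: a compiled .class file records the names of its methods and fields, much as a code file's symbol table records function and variable names. The sketch below (class and method names are invented for illustration) uses reflection to read that recorded information at run time.

```java
import java.lang.reflect.Method;

public class SymbolTableDemo {
    // An ordinary method whose name ends up recorded in the .class file.
    static int answer() { return 42; }

    // Returns true if this class's recorded method names include the given one,
    // roughly the way a linker or debugger looks a name up in a symbol table.
    static boolean hasSymbol(String name) {
        for (Method m : SymbolTableDemo.class.getDeclaredMethods()) {
            if (m.getName().equals(name)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasSymbol("answer"));   // true: the name is recorded
        System.out.println(hasSymbol("missing"));  // false: no such symbol
    }
}
```

This is only an analogy: Java class files resolve names at load time via the JVM, whereas a UNIX linker resolves them when building the executable.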

•   When a user types in the name of a command in the UNIX shell, this results in the
    creation of what is called a process. On any large computer, especially one with
    more than one person using it at the same time, there are normally many
    processes executing at any given time. Under UNIX, every time a user types in a
    command, they create a separate process. If several users execute the same
    command, then each one creates a different process. The Macintosh is a little
    different from UNIX. If the user double clicks on several data files for an
    application, only one process is created, and this process manages all the data
    files.

    A process consists of the virtual memory, information on open files, and other
    operating system resources shared by its threads of execution, all of which
    execute in the same virtual memory.

    The threads in a process execute not only the code from a user program. They can
    also execute the shared library code, operating system kernel code, and (on the
    Alpha) what is called PALcode.

    A process is created to execute a command. The code file for the command is
    used to initialise the virtual memory containing the user code and global
    variables. The user stack for the initial thread is cleared, and the parameters to the
    command are passed as parameters to the main function of the program. Files are
    opened corresponding to the standard input and output (keyboard and screen,
    unless file redirection is used).

    When a process is created, it is created with a single thread of execution.
    Conventional processes never have more than a single thread of execution, but
    multi-threaded processes are now becoming common place. We often speak about
    a program executing, or a process executing a program, when we really mean a
    thread within the process executes the program.

    In UNIX, a new process executing a new program is created by the fork() system
    call (which creates an almost identical copy of an existing process, executing the
    same program), followed by the exec() system call (which replaces the program
    being executed by the new program).
In the Java programming language, a new process executing a new program is
    created by the exec() method in the Runtime class. The Java exec() is probably
    implemented as a combination of the UNIX fork() and exec() system calls.
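The Java-level process creation mentioned above can be sketched as follows. This is a minimal example assuming a POSIX-like system where /bin/echo exists; under the hood, the JVM performs something like the fork()/exec() pair described.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ExecDemo {
    // Spawns a child process running /bin/echo and returns its first output line.
    static String runEcho(String message) throws Exception {
        // ProcessBuilder is the modern equivalent of Runtime.exec(); on UNIX the
        // JVM creates the child with a fork()/exec()-style sequence.
        Process child = new ProcessBuilder("/bin/echo", message).start();
        try (BufferedReader out = new BufferedReader(
                new InputStreamReader(child.getInputStream()))) {
            String line = out.readLine();   // read the child's standard output
            child.waitFor();                // wait for the child process to exit
            return line;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runEcho("hello from a child process"));
    }
}
```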

•   A thread is an instance of execution (the entity that executes). All the threads that
    make up a process share access to the same user program, virtual memory, open
    files, and other operating system resources. Each thread has its own program
    counter, general purpose registers, and user and kernel stack. The program
    counter and general purpose registers for a thread are stored in the CPU when the
    thread is executing, and saved away in memory when it is not executing.

    The Java programming language supports the creation of multiple threads. To
    create a thread in Java, we create an object that implements the Runnable
    interface (has a run() method), and use this to create a new Thread object. To
    initiate the execution of the thread, we invoke the start() method of the thread,
    which invokes the run() method of the Runnable object. The threads that make up
    a process need to use some kind of synchronisation mechanism to avoid more
    than one thread accessing shared data at the same time. In Java, synchronisation is
     done by synchronised methods. The wait(), notify(), and notifyAll() methods in
    the Object class are used to allow a thread to wait until the data has been updated
    by another thread, and to notify other threads when the data has been altered.
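The thread-creation and synchronisation mechanics described above can be sketched with a small invented example: several threads share one Counter object, and its synchronized methods ensure that only one thread updates the shared value at a time.

```java
public class ThreadDemo {
    // Shared data protected by synchronized methods.
    static class Counter {
        private int value = 0;
        synchronized void increment() { value++; }  // one thread at a time
        synchronized int get() { return value; }
    }

    // Starts nThreads threads, each incrementing the shared counter perThread
    // times, and returns the final count after all threads have finished.
    static int runThreads(int nThreads, int perThread) throws InterruptedException {
        Counter counter = new Counter();
        Thread[] threads = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            // The lambda is the Runnable; start() schedules the new thread,
            // which then invokes run().
            threads[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) counter.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();  // wait for every thread to finish
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runThreads(4, 100_000)); // prints 400000
    }
}
```

Without the synchronized keyword on increment(), updates from different threads could interleave and the final count would usually fall short.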

     In UNIX C, the pthreads library contains functions to create new threads, and
     to provide the equivalent of synchronised methods, wait(), notify(), etc. The Java
    mechanism is in fact based on the pthreads library. In Java, synchronisation is
    built into the design of the language (the compiler knows about synchronised
    methods). In C, there is no syntax to specify that a function (method) is
    synchronised, and the programmer has to explicitly put in code at the start and
    end of the method to gain and relinquish exclusive access to a data structure.

    Some people call threads lightweight processes, and processes heavyweight
    processes. Some people call processes tasks.

     Many application programs, such as Microsoft Word, are starting to make use of
    multiple threads. For example, there is a thread that processes the input, and a
    thread for doing repagination in the background. A compiler could have multiple
    threads, one for lexical analysis, one for parsing, one for analysing the abstract
    syntax tree. These can all execute in parallel, although the parser cannot execute
    ahead of the lexical analyser, and the abstract syntax tree analyser can only
    process the portion of the abstract syntax tree already generated by the parser. The
    code for performing graphics can easily be sped up by having multiple threads,
    each painting a portion of the screen. File and network servers have to deal with
    multiple external requests, many of which block before the reply is given. An
    elegant way of programming servers is to have a thread for each request.
Multi-threaded processes are becoming very important, because computers with
multiple processors are becoming commonplace, as are distributed systems, and servers.
It is important that you learn how to program in this manner. Multi-threaded
programming, particularly dealing with synchronisation issues, is not trivial, and a good
conceptual understanding of synchronisation is essential. Synchronisation is dealt with
fully in the stage 3 operating systems paper.

Objectives

An operating system can be thought of as having three objectives:

Convenience: An operating system makes a computer more convenient to use.

Efficiency: An operating system allows the computer system resources to be used in an
efficient manner.

Ability to evolve: An operating system should be constructed in such a way as to permit
the effective development, testing and introduction of new system functions without
interfering with current services provided.

What is an Operating System?

An operating system (OS) is a program that controls the execution of an application
program and acts as an interface between the user and computer hardware. The purpose
of an OS is to provide an environment in which a user can execute programs in a
convenient and efficient manner.

The operating system must provide certain services to programs and to the users of those programs in order to make the programming task easier; these services differ from one OS to another.

Functions of an Operating System

Modern operating systems generally have the following three major goals. Operating systems generally accomplish these goals by running processes in a low-privilege state and providing service calls that invoke the operating system kernel in a high-privilege state.

To hide details of hardware

An abstraction is software that hides lower-level details and provides a set of higher-level functions. An operating system transforms the physical world of devices, instructions, memory, and time into a virtual world that is the result of abstractions built by the operating system. There are several reasons for abstraction.
First, the code needed to control peripheral devices is not standardized. Operating systems provide subroutines called device drivers that perform operations on behalf of programs, for example input/output operations.

Second, the operating system introduces new functions as it abstracts the hardware. For instance, the operating system introduces the file abstraction so that programs do not have to deal with disks.

Third, the operating system transforms the computer hardware into multiple virtual
computers, each belonging to a different program. Each program that is running is called
a process. Each process views the hardware through the lens of abstraction.

Fourth, the operating system can enforce security through abstraction.

Resources Management

An operating system, as a resource manager, controls how processes (the active agents) may access resources (passive entities). One can view operating systems from two points of view: as a resource manager and as an extended machine. From the resource-manager point of view, operating systems manage the different parts of the system efficiently; from the extended-machine point of view, operating systems provide a virtual machine to users that is more convenient to use. Structurally, operating systems can be designed as a monolithic system, a hierarchy of layers, a virtual machine system, a micro-kernel, or using the client-server model. The basic concepts of operating systems are processes, memory management, I/O management, the file system, and security.

Provide an effective user interface

The user interacts with the operating system through the user interface and is usually interested in the look and feel of the operating system. The most important components of the user interface are the command interpreter, the file system, on-line help, and application integration. The recent trend has been toward increasingly integrated graphical user interfaces that encompass the activities of multiple processes on networks of computers.

Evolution of Operating System

Operating system and computer architecture have had a great deal of influence on each
other. To facilitate the use of the hardware, OS’s were developed. As operating systems
were designed and used, it became obvious that changes in the design of the hardware
could simplify them.

Early Systems

In the earliest days of electronic digital computing, everything was done on the bare
hardware. Very few computers existed and those that did exist were experimental in
nature. The researchers who were making the first computers were also the programmers
and the users. They worked directly on the “bare hardware”. There was no operating
system. The experimenters wrote their programs in assembly language and a running
program had complete control of the entire computer. Debugging consisted of a
combination of fixing both the software and hardware, rewriting the object code and
changing the actual computer itself.

The lack of any operating system meant that only one person could use a computer at a
time. Even in the research lab, there were many researchers competing for limited
computing time. The first solution was a reservation system, with researchers signing up
for specific time slots.

The high cost of early computers meant that it was essential that the rare computers be
used as efficiently as possible. The reservation system was not particularly efficient. If a
researcher finished work early, the computer sat idle until the next time slot. If the
researcher’s time ran out, the researcher might have to pack up his or her work in an
incomplete state at an awkward moment to make room for the next researcher. Even
when things were going well, a lot of the time the computer actually sat idle while the
researcher studied the results (or studied memory of a crashed program to figure out what
went wrong).

The solution to this problem was to have programmers prepare their work off-line on
some input medium (often on punched cards, paper tape, or magnetic tape) and then hand
the work to a computer operator. The computer operator would load up jobs in the order
received (with priority overrides based on politics and other factors). Each job still ran
one at a time with complete control of the computer, but as soon as a job finished, the
operator would transfer the results to some output medium (punched tape, paper tape,
magnetic tape, or printed paper) and deliver the results to the appropriate programmer. If the program ran to completion, the result would be some end data. If the program crashed, memory would be transferred to some output medium for the programmer to study (because some of the early business computing systems used magnetic core memory, these became known as “core dumps”).

Soon after the first successes with digital computer experiments, computers moved out of
the lab and into practical use. The first practical application of these experimental digital
computers was the generation of artillery tables for the British and American armies.
Much of the early research in computers was paid for by the British and American
militaries. Business and scientific applications followed.

As computer use increased, programmers noticed that they were duplicating the same
efforts.

Every programmer was writing his or her own routines for I/O, such as reading input
from a magnetic tape or writing output to a line printer. It made sense to write a common
device driver for each input or output device and then have every programmer share the
same device drivers rather than each programmer writing his or her own. Some
programmers resisted the use of common device drivers in the belief that they could write
“more efficient”, faster, or “better” device drivers of their own.

Additionally each programmer was writing his or her own routines for fairly common
and repeated functionality, such as mathematics or string functions. Again, it made sense
to share the work instead of everyone repeatedly “reinventing the wheel”. These shared
functions would be organized into libraries and could be inserted into programs as
needed. In the spirit of cooperation among early researchers, these library functions were
published and distributed for free, an early example of the power of the open source
approach to software development.

Simple Batch Systems

When punched cards were used for user jobs, processing of a job involved physical
actions by the system operator, e.g., loading a deck of cards into the card reader, pressing
switches on the computer’s console to initiate a job, etc. These actions wasted a lot of
central processing unit (CPU) time.

                               Figure 1.1: Simple Batch System
              (memory divided into an Operating System area and a User Program Area)

To speed up processing, jobs with similar needs were batched together and were run as a
group. Batch processing (BP) was implemented by locating a component of the BP
system, called the batch monitor or supervisor, permanently in one part of computer’s
memory. The remaining memory was used to process a user job – the current job in the
batch as shown in the figure 1.1 above.

The delay between job submission and completion was considerable in a batch-processed system, as a number of programs were put in a batch and the entire batch had to be processed before the results were printed. Further, card reading and printing were slow, as they used slower mechanical units compared to the CPU, which was electronic; the speed mismatch was of the order of 1000. To alleviate this problem, programs were spooled. SPOOL is an acronym for Simultaneous Peripheral Operation On-Line. In essence, the idea was to use a cheaper processor, known as a peripheral processing unit (PPU), to read programs and data from cards and store them on a disk. The faster CPU read programs and data from the disk, processed them, and wrote the results back to the disk. The cheaper processor then read the results from the disk and printed them.

Multi Programmed Batch Systems

Even though disks are faster than card readers and printers, they are still two orders of magnitude slower than the CPU. It is thus useful to have several programs ready to run waiting in main memory. When one program needs input/output (I/O) from disk, it is suspended and another program, whose data is already in main memory (as shown in figure 1.2 below), is taken up for execution. This is called multiprogramming.



                        Figure 1.2: Multi Programmed Batch Systems
         (memory divided into an Operating System area and areas for Programs 1–4)

Multiprogramming (MP) increases CPU utilization by organizing jobs such that the CPU
always has a job to execute. Multiprogramming is the first instance where the operating
system must make decisions for the user.

The MP arrangement ensures concurrent operation of the CPU and the I/O subsystem. It
ensures that the CPU is allocated to a program only when it is not performing an I/O
operation.

Time Sharing Systems

Multiprogramming features were superimposed on batch processing to ensure good utilization of the CPU, but from the point of view of a user the service was poor, as the response time, i.e., the time elapsed between submitting a job and getting the results, was unacceptably high. The development of interactive terminals changed the scenario. Computation became an on-line activity. A user could provide inputs to a computation from a terminal and could also examine the output of the computation on the same terminal. Hence, the response time needed to be drastically reduced. This was achieved by storing the programs of several users in memory and providing each user a slice of time on the CPU to process his/her program.

Distributed Systems

A recent trend in computer systems is to distribute computation among several processors. In loosely coupled systems, the processors do not share memory or a clock. Instead, each processor has its own local memory. The processors communicate with one another using a communication network.

The processors in a distributed system may vary in size and function, and are referred to by a number of different names, such as sites, nodes, or computers, depending on the context. The major reasons for building distributed systems are:
Resource sharing: If a number of different sites are connected to one another, then a
user at one site may be able to use the resources available at the other.

Computation speed up: If a particular computation can be partitioned into a number of
sub computations that can run concurrently, then a distributed system may allow a user to
distribute computation among the various sites to run them concurrently.

Reliability: If one site fails in a distributed system, the remaining sites can potentially
continue operations.

Communication: There are many instances in which programs need to exchange data with one another. A distributed database system is an example of this.

Real-time Operating System

The advent of timesharing provided good response times to computer users. However,
timesharing could not satisfy the requirements of some applications. Real-time (RT)
operating systems were developed to meet the response requirements of such
applications.

There are two flavors of real-time systems. A hard real-time system guarantees that critical tasks complete by a specified time. A less restrictive type of real-time system is a soft real-time system, where a critical real-time task gets priority over other tasks, and retains that priority until it completes. Areas in which this type is useful include multimedia, virtual reality, and advanced scientific projects such as undersea exploration and planetary rovers. Because of the expanded uses for soft real-time functionality, it is finding its way into most current operating systems, including major versions of UNIX and Windows NT.

A real-time operating system is one which helps to fulfill the worst-case response-time requirements of an application. An RT OS provides the following facilities for this purpose:

1.   Multitasking within an application.

2.   Ability to define the priorities of tasks.

3.   Priority-driven or deadline-oriented scheduling.

4.   Programmer-defined interrupts.

A task is a sub-computation in an application program, which can be executed
concurrently with other sub-computations in the program, except at specific places in its
execution called synchronization points. Multi-tasking, which permits the existence of
many tasks within the application program, provides the possibility of overlapping the
CPU and I/O activities of the application with one another. This helps in reducing its
elapsed time. The ability to specify priorities for the tasks provides additional controls to
a designer while structuring an application to meet its response-time requirements.

Real time operating systems (RTOS) are specifically designed to respond to events that
happen in real time. This can include computer systems that run factory floors, computer
systems for emergency room or intensive care unit equipment (or even the entire ICU),
computer systems for air traffic control, or embedded systems. RTOSs are grouped
according to the response time that is acceptable (seconds, milliseconds, microseconds)
and according to whether or not they involve systems where failure can result in loss of
life. Examples of real-time operating systems include QNX, Jaluna-1, ChorusOS, LynxOS, Windows CE .NET, and VxWorks AE.

Self assessment questions

   1.   What do the terms program, process, and thread mean?

   2.   What is the purpose of a compiler, assembler and linker?

   3.   What is the structure of a code file? What is the purpose of the symbol table in a code file?

   4.   Why are shared libraries essential on modern computers?

Operating System Components

Even though not all systems have the same structure, many modern operating systems share the goal of supporting the following types of system components.

Process Management

The operating system manages many kinds of activities, ranging from user programs to system programs such as the printer spooler, name servers, file servers, etc. Each of these activities is encapsulated in a process. A process includes the complete execution context (code, data, PC, registers, OS resources in use, etc.).

It is important to note that a process is not a program. A process is only one instance of a program in execution; many processes can be running the same program. The five major activities of an operating system in regard to process management are:

1. Creation and deletion of user and system processes.
2. Suspension and resumption of processes.
3. A mechanism for process synchronization.
4. A mechanism for process communication.
5. A mechanism for deadlock handling.
Main-Memory Management

Primary memory, or main memory, is a large array of words or bytes. Each word or byte has its own address. Main memory provides storage that can be accessed directly by the CPU. That is to say, for a program to be executed, it must be in main memory.

The major activities of an operating system in regard to memory management are:

   1.   Keeping track of which parts of memory are currently being used, and by whom.

   2.   Deciding which processes are to be loaded into memory when memory space becomes available.

   3.   Allocating and de-allocating memory space as needed.

File Management

A file is a collection of related information defined by its creator. Computers can store files on disk (secondary storage), which provides long-term storage. Some examples of storage media are magnetic tape, magnetic disk and optical disk. Each of these media has its own properties, such as speed, capacity, data transfer rate and access method.

A file system normally organized into directories to ease their use. These directories may
contain files and other directions.

The five main major activities of an operating system in regard to file management are

   1.   The creation and deletion of files.
   2.   The creation and deletion of directories.
   3.   The support of primitives for manipulating files and directories.
   4.   The mapping of files onto secondary storage.
   5.   The backup of files on stable storage media.
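These activities are exposed to user programs through system-call wrappers. The sketch below exercises file and directory creation and deletion through Python's standard os module:

```python
import os
import tempfile

# Scratch directory so the example is self-contained.
base = tempfile.mkdtemp()

d = os.path.join(base, "reports")
os.mkdir(d)                               # creation of a directory
path = os.path.join(d, "notes.txt")
with open(path, "w") as f:                # creation of a file
    f.write("hello")

exists_before = os.path.exists(path)
os.remove(path)                           # deletion of a file
os.rmdir(d)                               # deletion of a directory
os.rmdir(base)
exists_after = os.path.exists(path)
```

Each call here ultimately becomes a system call (e.g. mkdir, unlink), with the OS handling the mapping onto secondary storage.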

I/O System Management

The I/O subsystem hides the peculiarities of specific hardware devices from the user. Only the
device driver knows the peculiarities of the specific device to which it is assigned.

Secondary-Storage Management

Generally speaking, systems have several levels of storage, including primary storage,
secondary storage and cache storage. Instructions and data must be placed in primary
storage or cache to be referenced by a running program. Because main memory is too
small to accommodate all data and programs, and because its data are lost when power is lost,
the computer system must provide secondary storage to back up main memory. Secondary
storage consists of tapes, disks, and other media designed to hold information that will
eventually be accessed in primary storage. Storage at each level (primary, secondary, cache)
is ordinarily divided into bytes or words consisting of a fixed number of bytes. Each location in
storage has an address; the set of all addresses available to a program is called an address
space.

The three major activities of an operating system in regard to secondary storage
management are:

   1. Managing the free space available on the secondary-storage device.
   2. Allocation of storage space when new files have to be written.
   3. Scheduling the requests for access to the secondary-storage device.



Networking

A distributed system is a collection of processors that do not share memory, peripheral
devices, or a clock. The processors communicate with one another through
communication lines called a network. The communication-network design must consider
routing and connection strategies, and the problems of contention and security.



Protection System

If a computer system has multiple users and allows the concurrent execution of multiple
processes, then various processes must be protected from one another’s activities.
Protection refers to mechanisms for controlling the access of programs, processes, or users
to the resources defined by a computer system.



Command Interpreter System

A command interpreter is an interface between the operating system and the user. The user
gives commands, which are executed by the operating system (usually by turning them into
system calls). The main function of a command interpreter is to get and execute the next
user-specified command. The command interpreter is usually not part of the kernel, since
multiple command interpreters (shells, in UNIX terminology) may be supported by an
operating system, and they do not really need to run in kernel mode. There are two main
advantages of separating the command interpreter from the kernel.

   1. If we want to change the way the command interpreter looks, i.e., change its
      interface, we can do so only if the command interpreter is separate from the
      kernel; we cannot modify kernel code just to change the interface.
   2. If the command interpreter is a part of the kernel, it is possible for a malicious
      process to gain access to parts of the kernel that it should not have. To
      avoid this scenario it is advantageous to have the command interpreter separate
      from the kernel.
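A minimal command-interpreter loop can be sketched as follows. This is a toy shell: the built-in exit command and the helper name run_command are invented for illustration.

```python
import shlex
import subprocess

def run_command(line):
    """Parse one command line and execute it; return the exit status."""
    args = shlex.split(line)
    if not args:
        return 0
    if args[0] == "exit":          # a built-in handled by the shell itself
        raise SystemExit
    # External commands are handed to the OS for execution (the
    # subprocess module uses fork/exec-style system calls underneath).
    return subprocess.call(args)

status = run_command("echo hello from the shell")
```

Note that the interpreter itself runs entirely in user mode; only the system calls it issues cross into the kernel.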



Self Assessment Questions

   1.   Discuss the various components of an operating system.
   2.   Explain memory management and file management in brief.
   3.   Write notes on:
         1. Secondary-Storage Management
         2. Command Interpreter System

Operating System Services

Following are the five services provided by operating systems for the convenience of the
users.

Program Execution

The purpose of a computer system is to allow the user to execute programs. So the
operating system provides an environment where the user can conveniently run programs.
The user does not have to worry about memory allocation or multitasking or
anything; these things are taken care of by the operating system.

Running a program involves allocating and de-allocating memory, and CPU scheduling in
the case of multiprogramming. These functions cannot be given to user-level programs, so
user-level programs cannot help the user run programs independently, without help
from the operating system.
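As a small illustration of the program-execution service: the user simply asks for a program to be run, and allocation, scheduling, and cleanup happen underneath. A Python sketch:

```python
import subprocess
import sys

# Ask the OS to execute a program (here, another Python process that
# computes 2 + 2). The user never touches memory allocation or
# scheduling; the OS handles both and returns the result.
result = subprocess.run(
    [sys.executable, "-c", "print(2 + 2)"],
    capture_output=True, text=True,
)
```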

I/O Operations

Each program requires input and produces output. This involves the use of I/O. The
operating system hides from the user the details of the underlying I/O hardware. All
the user sees is that the I/O has been performed, without any of the details. So the operating
system, by providing I/O, makes it convenient for users to run programs.

For efficiency and protection, users cannot control I/O directly, so this service cannot be
provided by user-level programs.



File System Manipulation
The output of a program may need to be written into new files, or input taken from some
files. The operating system provides this service. The user does not have to worry about
secondary-storage management: the user gives a command for reading or writing to a file
and sees the task accomplished. Thus the operating system makes it easier for user
programs to accomplish their tasks.

This service involves secondary-storage management. The speed of I/O that depends on
secondary-storage management is critical to the speed of many programs, and hence it
is best left to the operating system to manage rather than giving individual
users control of it. It is not difficult for user-level programs to provide these
services, but for the above-mentioned reasons it is best if this service is left to the
operating system.



Communications

There are instances where processes need to communicate with each other to exchange
information, whether between processes running on the same computer or on
different computers. By providing this service the operating system relieves the user
of the worry of passing messages between processes. Where messages need
to be passed to processes on other computers through a network, this can be done by
user programs. The user program may be customized to the specifications of the
hardware through which the message transits, and provide the service interface to the
operating system.



Error Detection

An error in one part of the system may cause malfunctioning of the complete system. To
avoid such a situation the operating system constantly monitors the system to detect
errors. This relieves the user of the worry of errors propagating to various parts of
the system and causing malfunctions.

This service cannot be left to user programs, because it involves
monitoring, and in some cases altering, areas of memory, de-allocating the memory of a
faulty process, or perhaps relinquishing the CPU of a process that goes into an infinite loop.
These tasks are too critical to be handed over to user programs. A user program, if
given these privileges, can interfere with the correct (normal) operation of the operating
system.



Self Assessment Questions
1.   Explain the five services provided by the operating system.

Operating Systems for Different Computers

Operating systems can be grouped according to functionality: operating systems for
Supercomputers, Computer Clusters, Mainframes, Servers, Workstations, Desktops,
Handheld Devices, Real Time Systems, or Embedded Systems.

OS for Supercomputers:

Supercomputers are the fastest computers, very expensive and are employed for
specialized applications that require immense amounts of mathematical calculations, for
example, weather forecasting, animated graphics, fluid dynamic calculations, nuclear
energy research, and petroleum exploration. Out of many operating systems used for
supercomputing UNIX and Linux are the most dominant ones.

Computer Clusters Operating Systems:

A computer cluster is a group of computers that work together closely so that in many
respects they can be viewed as though they are a single computer. The components of a
cluster are commonly connected to each other through fast local area networks. Besides
many open-source operating systems and two versions of Windows 2003 Server, Linux
is popularly used for computer clusters.

Mainframe Operating Systems:

Mainframes used to be the primary form of computer. Mainframes are large centralized
computers and at one time they provided the bulk of business computing through time
sharing. Mainframes are still useful for some large scale tasks, such as centralized billing
systems, inventory systems, database operations, etc.

Minicomputers were smaller, less expensive versions of mainframes for businesses that
couldn’t afford true mainframes. The chief difference between a supercomputer and a
mainframe is that a supercomputer channels all its power into executing a few programs
as fast as possible, whereas a mainframe uses its power to execute many programs
concurrently. Besides various versions of operating systems by IBM for its early
System/360, to newest Z series operating system z/OS, Unix and Linux are also used as
mainframe operating systems.

Servers Operating Systems:

Servers are computers or groups of computers that provide services to other computers
connected via a network. Based on the requirements, there are various versions of server
operating systems from different vendors, starting with Microsoft’s Servers from
Windows NT to Windows 2003, OS/2 servers, UNIX servers, Mac OS servers, and
various flavors of Linux.
Workstation Operating Systems:

Workstations are more powerful versions of personal computers. As with desktop
computers, often only one person uses a particular workstation, but it runs a more powerful
version of a desktop operating system. Most of the time workstations are used as clients
in a network environment. Popular workstation operating systems are Windows NT
Workstation, Windows 2000 Professional, OS/2 clients, Mac OS, UNIX, Linux, etc.



Desktop Operating Systems:

A personal computer (PC) is a microcomputer whose price, size, and capabilities make it
useful for individuals; PCs are also known as desktop computers or home computers.

Desktop operating systems are used for personal computers, for example DOS, Windows
9x, Windows XP, Macintosh OS, Linux, etc.

Embedded Operating Systems:

Embedded systems are combinations of processors and special software that are inside of
another device, such as the electronic ignition system on cars. Examples of embedded
operating systems are Embedded Linux, Windows CE, Windows XP Embedded, Free
DOS, Free RTOS, etc.

Operating Systems for Handheld Computers:

Handheld operating systems are much smaller and less capable than desktop operating
systems, so that they can fit into the limited memory of handheld devices. These operating
systems include Palm OS, Windows CE, EPOC, and many Linux versions such as Qt
Palmtop and Pocket Linux.

Summary

An operating system (OS) is a program that controls the execution of application
programs and acts as an interface between the user and the computer hardware. The
objectives of an operating system are convenience, efficiency, and the ability to evolve.
Besides this, the operating system performs functions such as hiding details of the
hardware, resource management, and providing an effective user interface.

The process management component of the operating system is responsible for the creation,
termination, and state transitions of processes. The memory management unit is
mainly responsible for allocating and de-allocating memory to processes, and for keeping
track of memory usage by different processes. The operating system services are program
execution, I/O operations, file system manipulation, communication and error detection.



Terminal Questions
   1.  What is an operating system?
   2.  What are the objectives of an operating system?
   3.  Describe in brief the functions of an operating system.
   4.  Explain the evolution of operating systems in brief.
   5.  Write a note on batch operating systems. Discuss how they differ from
       multi-programmed batch systems.
   6.  What is the difference between multiprogramming and timesharing operating
       systems?
   7.  What are the typical features an operating system provides?
   8.  Explain the functions of the operating system as a file manager.
   9.  What are the different services provided by an operating system?
   10. Write notes on:
         1. Mainframe Operating Systems
         2. Embedded Operating Systems
         3. Server Operating Systems
         4. Desktop Operating Systems





Unit 2: Operating System Architecture

This unit deals with simple structure, the extended machine, and layered approaches. It
covers the different methodologies (models) for OS design. It introduces
virtual machines, virtual environments and machine aggregation, and also describes the
implementation techniques.



Introduction

A system as large and complex as a modern operating system must be engineered
carefully if it is to function properly and be modified easily. A common approach is to
partition the task into small components rather than have one monolithic system. Each of
these modules should be a well-defined portion of the system, with carefully defined
inputs, outputs, and functions. In this unit, we discuss how various components of an
operating system are interconnected and melded into a kernel.



Objective:

At the end of this unit, readers would be able to understand:
     •   What a kernel is; monolithic kernel architecture
     •   Layered Architecture
     •   Microkernel Architecture
     •   Operating System Components
     •   Operating System Services

OS as an Extended Machine

We can think of an operating system as an Extended Machine standing between our
programs and the bare hardware.




As shown in figure 2.1 above, the operating system interacts with the hardware, hiding it
from the application program, and user. Thus it acts as interface between user programs
and hardware.

Self Assessment Questions

1.   What is the role of an Operating System?

Simple Structure

Many commercial systems do not have well-defined structures. Frequently, such
operating systems started as small, simple, and limited systems and then grew beyond
their original scope. MS-DOS is an example of such a system. It was originally designed
and implemented by a few people who had no idea that it would become so popular. It
was written to provide the most functionality in the least space, so it was not divided into
modules carefully. In MS-DOS, the interfaces and levels of functionality are not well
separated. For instance, application programs are able to access the basic I/O routines to
write directly to the display and disk drives. Such freedom leaves MS-DOS vulnerable to
errant (or malicious) programs, causing entire system crashes when user programs fail.
Of course, MS-DOS was also limited by the hardware of its era. Because the Intel 8088
for which it was written provides no dual mode and no hardware protection, the designers
of MS-DOS had no choice but to leave the base hardware accessible.

Another example of limited structuring is the original UNIX operating system. UNIX is
another system that initially was limited by hardware functionality. It consists of two
separable parts:

                  •   the kernel and
                  •   the system programs

The kernel is further separated into a series of interfaces and device drivers, which have
been added and expanded over the years as UNIX has evolved. We can view the
traditional UNIX operating system as being layered. Everything below the system call
interface and above the physical hardware is the kernel. The kernel provides the file
system, CPU scheduling, memory management, and other operating-system functions
through system calls. Taken in sum, that is an enormous amount of functionality to be
combined into one level. This monolithic structure was difficult to implement and
maintain.

Self Assessment Questions

   1. ”In MS-DOS, the interfaces and levels of functionality are not well separated”.
   Comment on this.

   2.   What are the components of a Unix Operating System?

Layered Approach

With proper hardware support, operating systems can be broken into pieces that are
smaller and more appropriate than those allowed by the original MS-DOS or UNIX
systems. The operating system can then retain much greater control over the computer
and over the applications that make use of that computer. Implementers have more
freedom in changing the inner workings of the system and in creating modular operating
systems. Under the top-down approach, the overall functionality and features are
determined and then separated into components. Information hiding is also important,
because it leaves programmers free to implement the low-level routines as they see fit,
provided that the external interface of the routine stays unchanged and that the routine
itself performs the advertised task.
A system can be made modular in many ways. One method is the layered approach, in
which the operating system is broken up into a number of layers (levels). The bottom
layer (layer 0) is the hardware; the highest (layer N) is the user interface.



                                               Users
                                       File Systems
                               Inter-process Communication
                               I/O and Device Management
                                     Virtual Memory
                              Primitive Process Management
                                        Hardware

                               Fig. 2.2: Layered Architecture

An operating-system layer is an implementation of an abstract object made up of data and
the operations that can manipulate those data. A typical operating-system layer, say
layer M, consists of data structures and a set of routines that can be invoked by higher-
level layers. Layer M, in turn, can invoke operations on lower-level layers.
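The discipline that layer M is implemented using only the layer directly below it can be sketched with invented names (this is not a real kernel, just an illustration of the layering rule):

```python
class Hardware:                       # layer 0: the bare machine
    def read_block(self, n):
        return f"raw-block-{n}"

class DeviceLayer:                    # layer 1: built only on Hardware
    def __init__(self, hw):
        self.hw = hw
    def read(self, n):
        return self.hw.read_block(n)

class FileSystemLayer:                # layer 2: built only on DeviceLayer
    def __init__(self, dev):
        self.dev = dev
    def read_file(self, name):
        # Toy mapping: a file is one block chosen from its name;
        # real file systems map a file onto many blocks.
        return self.dev.read(len(name) % 4)

# Each layer sees only the interface of the layer below, never its
# internals; swapping DeviceLayer's implementation would not affect
# FileSystemLayer.
fs = FileSystemLayer(DeviceLayer(Hardware()))
data = fs.read_file("notes.txt")
```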

The main advantage of the layered approach is simplicity of construction and debugging.
The layers are selected so that each uses functions (operations) and services of only
lower-level layers. This approach simplifies debugging and system verification. The first
layer can be debugged without any concern for the rest of the system, because, by
definition, it uses only the basic hardware (which is assumed correct) to implement its
functions. Once the first layer is debugged, its correct functioning can be assumed while
the second layer is debugged, and so on. If an error is found during debugging of a
particular layer, the error must be on that layer, because the layers below it are already
debugged. Thus, the design and implementation of the system is simplified.

Each layer is implemented with only those operations provided by lower-level layers. A
layer does not need to know how these operations are implemented; it needs to know
only what these operations do. Hence, each layer hides the existence of certain data
structures, operations, and hardware from higher-level layers. The major difficulty with
the layered approach involves appropriately defining the various layers. Because a layer
can use only lower-level layers, careful planning is necessary. For example, the device
driver for the backing store (disk space used by virtual-memory algorithms) must be at a
lower level than the memory-management routines, because memory management
requires the ability to use the backing store.

Other requirements may not be so obvious. The backing-store driver would normally be
above the CPU scheduler, because the driver may need to wait for I/O and the CPU can
be rescheduled during this time. However, on a larger system, the CPU scheduler may
have more information about all the active processes than can fit in memory. Therefore,
this information may need to be swapped in and out of memory, requiring the backing-
store driver routine to be below the CPU scheduler.



A final problem with layered implementations is that they tend to be less efficient than
other types. For instance, when a user program executes an I/O operation, it executes a
system call that is trapped to the I/O layer, which calls the memory-management layer,
which in turn calls the CPU-scheduling layer, which is then passed to the hardware. At
each layer, the parameters may be modified; data may need to be passed, and so on. Each
layer adds overhead to the system call; the net result is a system call that takes longer
than does one on a non-layered system. These limitations have caused a small backlash
against layering in recent years. Fewer layers with more functionality are being designed,
providing most of the advantages of modularized code while avoiding the difficult
problems of layer definition and interaction.

Self Assessment Questions

1.   What is the layered Architecture of UNIX?

2.   What are the advantages of layered Architecture?

Micro-kernels

We have already seen that as UNIX expanded, the kernel became large and difficult to
manage. In the mid-1980s, researches at Carnegie Mellon University developed an
operating system called Mach that modularized the kernel using the microkernel
approach. This method structures the operating system by removing all nonessential
components from the kernel and implementing them as system and user-level programs.
The result is a smaller kernel. There is little consensus regarding which services should
remain in the kernel and which should be implemented in user space. Typically, however,
micro-kernels provide minimal process and memory management, in addition to a
communication facility.




          Device Drivers   File Server   Client Process   …   Virtual Memory
                                    Microkernel
                                     Hardware

                        Fig. 2.3: Microkernel Architecture

The main function of the microkernel is to provide a communication facility between the
client program and the various services that are also running in user space.
Communication is provided by message passing: the client program and a
service never interact directly. Rather, they communicate indirectly by exchanging
messages with the microkernel.
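This indirection can be sketched with a toy in-process model (all names invented; a real microkernel passes messages between separate address spaces):

```python
import queue

class Microkernel:
    """Toy message router: clients and services only ever talk to it."""
    def __init__(self):
        self.mailboxes = {}            # service name -> message queue

    def register(self, name):
        self.mailboxes[name] = queue.Queue()

    def send(self, service, message):
        self.mailboxes[service].put(message)

    def receive(self, service):
        return self.mailboxes[service].get_nowait()

kernel = Microkernel()
kernel.register("file_server")
kernel.send("file_server", ("open", "notes.txt"))   # issued by a client
request = kernel.receive("file_server")             # picked up by service
```

Note the client never holds a reference to the file server; if the server crashed and were restarted, the client's view through the kernel would be unchanged, which is the reliability argument made above.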

One benefit of the microkernel approach is ease of extending the operating system. All
new services are added to user space and consequently do not require modification of the
kernel. When the kernel does have to be modified, the changes tend to be fewer, because
the microkernel is a smaller kernel. The resulting operating system is easier to port from
one hardware design to another. The microkernel also provides more security and
reliability, since most services run as user rather than kernel processes; if a
service fails, the rest of the operating system remains untouched.

Several contemporary operating systems have used the microkernel approach. Tru64
UNIX (formerly Digital UNIX) provides a UNIX interface to the user, but it is
implemented with a Mach kernel. The Mach kernel maps UNIX system calls into
messages to the appropriate user-level services.

The following figure shows the UNIX operating system architecture. At the center is the
hardware, covered by the kernel. Above that are the UNIX utilities and the command
interface, such as the shell (sh), etc.
Self Assessment Questions

   1. What other facilities Micro-kernel provides in addition to Communication
   facility?

   2.   What are the benefits of Micro-kernel?

UNIX kernel Components

The UNIX kernel has the components depicted in figure 2.5 below. The figure is
divided into three levels: user mode, kernel mode, and hardware. The user mode
contains user programs, which can access the services of the kernel components using
the system call interface.

The kernel mode has four major components: system calls, the file subsystem, the process
control subsystem, and hardware control. The system calls are the interface between user
programs and the file and process control subsystems. The file subsystem is responsible
for file and I/O management through device drivers.
The process control subsystem contains the scheduler, inter-process communication and
memory management. Finally, hardware control is the interface between these two
subsystems and the hardware.




                             Fig. 2.5: Unix kernel components




Another example is QNX. QNX is a real-time operating system that is also based on the
microkernel design. The QNX microkernel provides services for message passing and
process scheduling. It also handles low-level network communication and hardware
interrupts. All other services in QNX are provided by standard processes that run outside
the kernel in user mode.

Unfortunately, microkernels can suffer from performance decreases due to increased
system function overhead. Consider the history of Windows NT. The first release had a
layered microkernel organization. However, this version delivered low performance
compared with that of Windows 95. Windows NT 4.0 partially redressed the performance
problem by moving layers from user space to kernel space and integrating them more
closely. By the time Windows XP was designed, its architecture was more monolithic
than microkernel.

Self Assessment Questions

   1.   What are the components of UNIX Kernel?

   2.    Under what circumstances a Micro-kernel may suffer from performance
   decrease?
Modules

Perhaps the best current methodology for operating-system design involves using object-
oriented programming techniques to create a modular kernel. Here, the kernel has a set of
core components and dynamically links in additional services either during boot time or
during run time. Such a strategy uses dynamically loadable modules and is common in
modern implementations of UNIX, such as Solaris, Linux and MacOSX. For example,
the Solaris operating system structure is organized around a core kernel with seven types
of loadable kernel modules:

   1.   Scheduling classes
   2.   File systems
   3.   Loadable system calls
   4.   Executable formats
   5.   STREAMS formats
   6.   Miscellaneous
   7.   Device and bus drivers

        Such a design allows the kernel to provide core services yet also allows certain
        features to be implemented dynamically. For example, device and bus drivers for
        specific hardware can be added to the kernel, and support for different file
        systems can be added as loadable modules. The overall result resembles a layered
        system in that each kernel section has defined, protected interfaces; but it is more
        flexible than a layered system in that any module can call any other module.
        Furthermore, the approach is like the microkernel approach in that the primary
        module has only core functions and knowledge of how to load and communicate
        with other modules; but it is more efficient, because modules do not need to
        invoke message passing in order to communicate.
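The idea of linking a module in at run time can be sketched at application level with Python's importlib. The Core class is invented for illustration; real kernels load compiled object code, not Python modules, but the on-demand linking is the same idea.

```python
import importlib

class Core:
    """Toy 'core kernel' that links in extra services only when asked."""
    def __init__(self):
        self.modules = {}              # loaded services, by name

    def load(self, name):
        # Dynamic linking: resolve and load the module at run time
        # rather than at build time.
        self.modules[name] = importlib.import_module(name)
        return self.modules[name]

core = Core()
json_mod = core.load("json")           # loaded on demand, like a driver
text = json_mod.dumps({"ok": True})    # call directly, no message passing
```

The last line shows why this is more efficient than a microkernel: once loaded, the module is invoked by an ordinary call, not by exchanging messages.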

        Self Assessment Questions

           1. Which strategy uses dynamically loadable modules and is common in
           modern implementations of UNIX?

           2. What are different loadable modules based on which the Solaris operating
           system structure is organized around a core kernel?



Introduction to Virtual Machine

The layered approach of operating systems is taken to its logical conclusion in the
concept of a virtual machine. The fundamental idea behind a virtual machine is to abstract
the hardware of a single computer (the CPU, Memory, Disk drives, Network Interface
Cards, and so forth) into several different execution environments and thereby creating
the illusion that each separate execution environment is running its own private
computer. By using CPU Scheduling and Virtual Memory techniques, an operating
system can create the illusion that a process has its own processor with its own (virtual)
memory. Normally a process has additional features, such as system calls and a file
system, which are not provided by the hardware. The Virtual machine approach does not
provide any such additional functionality but rather an interface that is identical to the
underlying bare hardware. Each process is provided with a (virtual) copy of the
underlying computer.

Hardware Virtual machine

The original meaning of virtual machine, sometimes called a hardware virtual
machine, is that of a number of discrete identical execution environments on a single
computer, each of which runs an operating system (OS). This can allow applications
written for one OS to be executed on a machine which runs a different OS, or provide
execution “sandboxes” which provide a greater level of isolation between processes than
is achieved when running multiple processes on the same instance of an OS. One use is to
provide multiple users the illusion of having an entire computer, one that is their
“private” machine, isolated from other users, all on a single physical machine. Another
advantage is that booting and restarting a virtual machine can be much faster than with a
physical machine, since it may be possible to skip tasks such as hardware initialization.

Such software is now often referred to with the terms virtualization and virtual servers.
The host software which provides this capability is often referred to as a virtual machine
monitor or hypervisor.

Software virtualization can be done in three major ways:

   • Emulation, full system simulation, or "full virtualization with dynamic
     recompilation": the virtual machine simulates the complete hardware, allowing an
     unmodified OS for a completely different CPU to be run.
   • Paravirtualization: the virtual machine does not simulate hardware but instead
     offers a special API that requires OS modifications. An example of this is
     XenSource's XenEnterprise (www.xensource.com).
   • Native virtualization and "full virtualization": the virtual machine only partially
     simulates enough hardware to allow an unmodified OS to be run in isolation, but
     the guest OS must be designed for the same type of CPU. The term native
     virtualization is also sometimes used to indicate that hardware assistance through
     Virtualization Technology is used.




Application virtual machine

Another meaning of virtual machine is a piece of computer software that isolates the
application being used by the user from the computer. Because versions of the virtual
machine are written for various computer platforms, any application written for the
virtual machine can be operated on any of the platforms, instead of having to produce
separate versions of the application for each computer and operating system. The
application is run on the computer using an interpreter or Just In Time compilation. One
of the best known examples of an application virtual machine is Sun Microsystem’s Java
Virtual Machine.
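The idea can be sketched with a toy stack-based interpreter whose invented "bytecode" runs wherever the interpreter runs. This is only an illustration of the concept; real application VMs such as the JVM have far richer instruction sets.

```python
def run(program):
    """Execute a list of (opcode, argument) pairs on a value stack."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4, expressed in the toy bytecode. The same program runs
# unchanged on any platform that has the interpreter.
result = run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
              ("PUSH", 4), ("MUL", None)])
```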

Self Assessment Questions

1.   What do you mean by a Virtual Machine?

2.   Differentiate Hardware Virtual Machines and Software Virtual Machines.

Virtual Environment

A virtual environment (otherwise referred to as Virtual private server) is another kind of
a virtual machine. In fact, it is a virtualized environment for running user-level programs
(i.e. not the operating system kernel and drivers, but applications). Virtual environments
are created using the software implementing operating system-level virtualization
approach, such as Virtuozzo, FreeBSD Jails, Linux-VServer, Solaris Containers, chroot
jail and OpenVZ.

Machine Aggregation

A less common use of the term is to refer to a computer cluster consisting of many
computers that have been aggregated together as a larger and more powerful “virtual”
machine. In this case, the software allows a single environment to be created spanning
multiple computers, so that the end user appears to be using only one computer rather
than several.

PVM (Parallel Virtual Machine) and MPI (Message Passing Interface) are two common
software packages that permit a heterogeneous collection of networked UNIX and/or
Windows computers to be used as a single, large, parallel computer. Thus large
computational problems can be solved more cost effectively by using the aggregate
power and memory of many computers than with a traditional supercomputer. The Plan9
Operating System from Bell Labs uses this approach.

Boston Circuits had released the gCore (grid-on-chip) Central Processing Unit (CPU)
with 16 ARC 750D cores and a Time-machine hardware module to provide a virtual
machine that uses this approach.



Self Assessment Questions

1.   What is Virtual Environment?
2.   Explain Machine Aggregation.

Implementation Techniques

Emulation of the underlying raw hardware (native execution)

This approach is described as full virtualization of the hardware, and can be implemented
using a Type 1 or Type 2 hypervisor. (A Type 1 hypervisor runs directly on the hardware;
a Type 2 hypervisor runs on another operating system, such as Linux.) Each virtual
machine can run any operating system supported by the underlying hardware. Users can
thus run two or more different “guest” operating systems simultaneously, in separate
“private” virtual computers.

The pioneer system using this concept was IBM’s CP-40, the first (1967) version of
IBM’s CP/CMS (1967-1972) and the precursor to IBM’s VM family (1972-present).
With the VM architecture, most users run a relatively simple interactive computing
single-user operating system, CMS, as a “guest” on top of the VM control program (VM-
CP). This approach kept the CMS design simple, as if it were running alone; the control
program quietly provides multitasking and resource management services “behind the
scenes”. In addition to CMS, VM users can run any of the other IBM operating systems,
such as MVS or z/OS. z/VM is the current version of VM, and is used to support
hundreds or thousands of virtual machines on a given mainframe. Some installations use
Linux for zSeries to run Web servers, where Linux runs as the operating system within
many virtual machines.

Full virtualization is particularly helpful in operating system development, when
experimental new code can be run at the same time as older, more stable, versions, each
in separate virtual machines. (The process can even be recursive: IBM debugged new
versions of its virtual machine operating system, VM, in a virtual machine running under
an older version of VM, and even used this technique to simulate new hardware.)

The x86 processor architecture as used in modern PCs does not actually meet the Popek
and Goldberg virtualization requirements. Notably, there is no execution mode where all
sensitive machine instructions always trap, which would allow per-instruction
virtualization.

Despite these limitations, several software packages have managed to provide
virtualization on the x86 architecture, even though dynamic recompilation of privileged
code, as first implemented by VMware, incurs some performance overhead as compared
to a VM running on a natively virtualizable architecture such as the IBM System/370 or
Motorola MC68020. By now, several other software packages such as Virtual PC,
VirtualBox, Parallels Workstation and Virtual Iron manage to implement virtualization
on x86 hardware.
On the other hand, plex86 can run only Linux under Linux, using a specifically patched
kernel. It does not emulate a processor, but uses Bochs for emulation of motherboard
devices.

Intel and AMD have introduced features to their x86 processors to enable virtualization
in hardware.



Emulation of a non-native system

Virtual machines can also perform the role of an emulator, allowing software applications
and operating systems written for another computer processor architecture to be run.

Some virtual machines emulate hardware that only exists as a detailed specification. For
example:

     •   One of the first was the p-code machine specification, which allowed
         programmers to write Pascal programs that would run on any computer running
         virtual machine software that correctly implemented the specification.
     •   The specification of the Java virtual machine.
     •   The Common Language Infrastructure virtual machine at the heart of the
         Microsoft .NET initiative.
     •   Open Firmware allows plug-in hardware to include boot-time diagnostics,
         configuration code, and device drivers that will run on any kind of CPU.

This technique allows diverse computers to run any software written to that specification;
only the virtual machine software itself must be written separately for each type of
computer on which it runs.
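To make the idea concrete, here is a minimal sketch of a stack-based virtual machine in the spirit of the p-code idea. The four-instruction set (PUSH, ADD, MUL, PRINT) is invented for illustration; only the interpreter is machine-specific, while the instruction list itself is portable to any host that implements the same specification.

```python
# A minimal stack-based virtual machine, in the spirit of the p-code idea.
# The instruction set (PUSH, ADD, MUL, PRINT) is invented for illustration;
# only this interpreter is machine-specific -- the program itself is portable.

def run(program):
    stack = []
    output = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":
            output.append(stack.pop())
        else:
            raise ValueError("unknown opcode: " + op)
    return output

# The same "portable" program runs unchanged on any host with this
# interpreter: it computes (2 + 3) * 4.
program = [("PUSH", 2), ("PUSH", 3), ("ADD",),
           ("PUSH", 4), ("MUL",), ("PRINT",)]
```

Porting this virtual machine to a new computer means rewriting only `run`; every program written against the instruction-set specification keeps working.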

Self Assessment Questions

1.   What are the techniques to realize Virtual Machines concept?

2.   What are the advantages of Virtual Machines?

Operating system-level virtualization

Operating System-level Virtualization is a server virtualization technology which
virtualizes servers on an operating system (kernel) layer. It can be thought of as
partitioning: a single physical server is sliced into multiple small partitions (otherwise
called virtual environments (VE), virtual private servers (VPS), guests, zones etc); each
such partition looks and feels like a real server, from the point of view of its users.

The operating system level architecture has low overhead, which helps to maximize the
efficient use of server resources. The virtualization introduces only a negligible overhead
and allows running hundreds of virtual private servers on a single physical server. In
contrast, approaches such as full virtualization (like VMware) and paravirtualization
(like Xen or UML) cannot achieve such density, due to the overhead of running multiple
kernels. On the other hand, operating system-level virtualization does not allow running
different operating systems (i.e. different kernels), although different libraries,
distributions etc. are possible.

Self Assessment Questions

1.   Describe the Operating System Level Virtualization.

Summary

The virtual machine concept has several advantages. In this environment, there is
complete protection of the various system resources. Each virtual machine is completely
isolated from all other virtual machines, so there are no protection problems. At the same
time, however, there is no direct sharing of resources. Two approaches to provide sharing
have been implemented. A virtual machine is a perfect vehicle for operating systems
research and development.

The operating system as an extended machine acts as an interface between hardware and
user application programs. The kernel is the essential center of a computer operating
system, i.e. the core that provides basic services for all other parts of the operating
system. It includes the interrupt handler, scheduler, operating system address space
manager, etc.

In the layered type of operating system architecture, the components of the kernel are
built as layers on one another, and each layer can interact with its neighbour through an
interface. In micro-kernel architecture, by contrast, most of these components are not part
of the kernel but act as another layer on top of it, while the kernel comprises only the
essential and basic components.



Terminal Questions

     1. Explain operating system as extended machine.
     2. What is a kernel? What are the main components of a kernel?
     3. Explain monolithic type of kernel architecture in brief.
     4. What is a micro-kernel? Describe its architecture.
     5. Compare micro-kernel with layered architecture of operating system.
     6. Describe UNIX kernel components in brief.
     7. What are the components of operating system?
     8. Explain the responsibilities of operating system as process management.
     9. Explain the function of operating system as file management.
     10. What are different services provided by an operating system?
Unit 3: Process Management

This unit covers process management and threads: process creation, termination, process
states and process control, as well as a comparison of processes and threads, the types of
threads, etc.



Introduction

This unit discusses the definition of a process, process creation, process termination,
process states, and process control. It also deals with threads and thread types.

A process can be simply defined as a program in execution. A process comprises, along
with the program code, the program counter value, processor register contents, values of
variables, the stack, and program data.

A process is created and terminated, and it passes through some or all of the process
states, such as New, Ready, Running, Waiting, and Exit.

A thread is a single sequential stream of execution within a process. Because threads have
some of the properties of processes, they are sometimes called lightweight processes.
There are two types of threads: user-level threads (ULT) and kernel-level threads (KLT).
User-level threads are mostly used on systems where the operating system does not
support threads, but they can also be combined with kernel-level threads. Threads have
properties similar to those of processes, e.g. execution states, context switching, etc.
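As a small illustration of the "lightweight" nature of threads, the Python sketch below runs several threads inside one process. Unlike separate processes, they share the same address space, so all of them update the same counter; the lock is needed precisely because of that sharing.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    # All threads share the process's global data, unlike separate processes.
    global counter
    for _ in range(n):
        with lock:          # protect the shared variable from concurrent updates
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                # wait for all threads to finish

print(counter)              # 4 threads x 1000 increments of one shared counter
```

If the four workers were separate processes instead of threads, each would increment its own private copy of `counter`, and the parent's copy would remain 0.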



Objectives

At the end of this unit, you will be able to understand:

  What is a Process?

  Process Creation, Process Termination

  Process States, Process Control

  Threads

  Types of Threads
What is a Process?

The notion of process is central to the understanding of operating systems. The term
process is used somewhat interchangeably with 'task' or 'job'. There are quite a few
definitions presented in the literature, for instance:

  A program in execution.
  An asynchronous activity.
  The entity to which processors are assigned.
  The 'dispatchable' unit.



And many more, but the definition "Program in Execution" seems to be the most
frequently used, and this is the concept we will use in the present study of operating
systems.

Now that we have agreed upon the definition of process, the question is: what is the
relation between a process and a program? Are they the same thing under different names,
or is it called a program when sleeping (not executing) and a process when executing?

Well, to be precise, a process is not the same as a program. A process is more than the
program code. A process is an 'active' entity, as opposed to a program, which is
considered a 'passive' entity. As we all know, a program is an algorithm expressed in
some programming language. Being passive, a program is only a part of a process. A
process, on the other hand, includes:

  Current value of the Program Counter (PC)
  Contents of the processor's registers
  Values of the variables
  The process stack, which typically contains temporary data such as subroutine
parameters, return addresses, and temporary variables
  A data section that contains global variables

A process is the unit of work in a system.

In the process model, all software on the computer is organized into a number of
sequential processes. A process includes its PC, registers, and variables. Conceptually,
each process has its own virtual CPU. In reality, the CPU switches back and forth among
processes.

Process Creation

In general-purpose systems, some way is needed to create processes as needed during
operation. There are four principal events that lead to process creation:

  System initialization.
  Execution of a process-creation system call by a running process.
  A user request to create a new process.
  Initialization of a batch job.
Foreground processes interact with users. Background processes stay in the background,
sleeping, but suddenly spring to life to handle activity such as email, web pages,
printing, and so on. Background processes are called daemons.

A process may create a new process by executing the system call 'fork' in UNIX, which
creates an exact clone of the calling process. The creating process is called the parent
process and the created one is called the child process. Only one parent is needed to
create a child process, so this creation of processes yields a hierarchical structure of
processes. Note that each child has only one parent, but each parent may have many
children. After the fork, the two processes, the parent and the child, initially have the
same memory image, the same environment strings and the same open files. After a
process is created, both the parent and the child have their own distinct address spaces.
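The fork behaviour described above can be observed directly. The following Python sketch uses os.fork, a thin wrapper over the UNIX fork system call (so it runs on UNIX-like systems only): the parent and child continue from the same point with the same memory image, yet changes made by the child do not affect the parent, because the address spaces are distinct.

```python
import os

x = 10
pid = os.fork()           # clone the calling process

if pid == 0:
    # Child: fork returned 0; it starts with the same memory image...
    x = x + 1             # ...but this change affects only the child's copy
    os._exit(0)           # terminate the child immediately
else:
    # Parent: fork returned the child's process identifier
    os.waitpid(pid, 0)    # wait for the child to terminate
    print("parent's x is still", x)   # prints 10: address spaces are distinct
```

The single call to fork returns twice, once in each process; testing its return value is how the parent and child take different paths.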

Following are some reasons for creation of a process

   1.   User logs on.

   2.   User starts a program.

   3.   Operating system creates a process to provide a service, e.g., to manage a printer.

   4.   Some program starts another process.

Creation of a process involves following steps:

   1. Assign a unique process identifier to the new process, followed by making new
   entry in to the process table regarding this process.

   2. Allocate space for the process: this operation involves finding how much space is
   needed by the process and allocating space to the parts of the process such as the user
   program, user data, stack and process attributes. The space requirement can be
   taken by default based on the type of the process, or from the parent process if the
   process is spawned by another process.

   3. Initialize Process Control Block: the PCB contains various attributes required to
   execute and control a process, such as process identification, processor status
   information and control information. This can be initialized to standard default values
   plus attributes that have been requested for this process.

   4. Set the appropriate linkages: the operating system maintains various queues
   related to a process in the form of linked lists, the newly created process should be
   attached to one of such queues.
5. Create or expand other data structures: depending on the implementation, an
   operating system may need to create some data structures for this process, for
   example to maintain accounting file for billing or performance assessment.
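The five creation steps above can be sketched as data structures. This is a hedged illustration only: the field names (`pid`, `parent_pid`, `program_counter`, and so on) and the helper `create_process` are invented for this sketch, not taken from any real kernel, and step 2 (allocating space for the user program, data and stack) is omitted.

```python
from dataclasses import dataclass, field
from itertools import count

_next_pid = count(1)          # step 1: source of unique process identifiers

@dataclass
class PCB:
    """Process Control Block: illustrative fields only."""
    pid: int
    parent_pid: int
    state: str = "New"        # step 3: initialize to standard default values
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)

process_table = {}            # step 1: the process table
ready_queue = []              # step 4: one of the kernel's queues (linkages)

def create_process(parent_pid=0):
    pid = next(_next_pid)             # step 1: assign a unique identifier
    pcb = PCB(pid=pid, parent_pid=parent_pid)
    process_table[pid] = pcb          # step 1: new entry in the process table
    # (step 2, allocating user program/data/stack space, is omitted here)
    pcb.state = "Ready"
    ready_queue.append(pcb)           # step 4: set the appropriate linkages
    return pcb

p = create_process()
```

A real kernel stores the same kinds of attributes, but in fixed-size structures and with many more fields (see the discussion of the process control block later in this unit).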

Process Termination

A process terminates when it finishes executing its last statement. Its resources are
returned to the system, it is purged from any system lists or tables, and its process control
block (PCB) is erased, i.e., the PCB's memory space is returned to a free memory pool.
A process terminates for one of the following reasons:

   •   Normal Exit Most processes terminate because they have done their job. This
       call is exit in UNIX.
   •   Error Exit When a process discovers a fatal error. For example, a user tries to
       compile a program that does not exist.
   •   Fatal Error An error caused by the process due to a bug in the program, for
       example, executing an illegal instruction, referencing non-existent memory or
       dividing by zero.
   •   Killed by another Process A process executes a system call telling the
       operating system to terminate some other process.

Process States

A process goes through a series of discrete process states during its lifetime. Depending
on the implementation, operating systems may differ in the number of states a process
goes through. Though there are various state models, ranging from two states to nine
states, we will first see a five-state model and then a seven-state model, as lower-state
models are now obsolete.

Five State Process Model

Following are the states of a five-state process model. Figure 3.1 shows these state
transitions.

   •   New State The process being created.
   •   Terminated State The process has finished execution.
•   Blocked (waiting) State When a process blocks, it does so because logically it
        cannot continue, typically because it is waiting for input that is not yet available.
        Formally, a process is said to be blocked if it is waiting for some event to happen
        (such as an I/O completion) before it can proceed. In this state a process is unable
        to run until some external event happens.
    •   Running State A process is said to be running if it currently has the CPU,
        that is, it is actually using the CPU at that particular instant.
    •   Ready State A process is said to be ready if it could use a CPU if one were
        available. It is runnable but temporarily stopped to let another process run.

Logically, the ‘Running’ and ‘Ready’ states are similar. In both cases the process is
willing to run, only in the case of ‘Ready’ state, there is temporarily no CPU available for
it. The ‘Blocked’ state is different from the ‘Running’ and ‘Ready’ states in that the
process cannot run, even if the CPU is available.

Following are the six possible transitions among the above-mentioned five states.

Transition 1 occurs when a process discovers that it cannot continue. If a running process
initiates an I/O operation before its allotted time expires, it voluntarily relinquishes the
CPU.

This state transition is:

   Block (process): Running → Blocked.

Transition 2 occurs when the scheduler decides that the running process has run long
enough and it is time to let another process have CPU time.

This state transition is:
Time-Run-Out (process): Running → Ready.

Transition 3 occurs when all other processes have had their share and it is time for the
first process to run again.

This state transition is:

  Dispatch (process): Ready → Running.

Transition 4 occurs when the external event for which a process was waiting (such as
arrival of input) happens.

This state transition is:

Wakeup (process): Blocked → Ready.

Transition 5 occurs when the process is created.

This state transition is:

  Admitted (process): New → Ready.

Transition 6 occurs when the process has finished execution.

This state transition is:

Exit (process): Running → Terminated.
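The six transitions above can be captured as a small table-driven state machine. The sketch below is illustrative (the event names are this example's own labels for the transitions); its point is that any transition not in the table is illegal, e.g. a Blocked process can never move directly to Running.

```python
# Legal transitions of the five-state process model, keyed by (state, event).
TRANSITIONS = {
    ("New",     "admit"):    "Ready",       # Transition 5
    ("Ready",   "dispatch"): "Running",     # Transition 3
    ("Running", "timeout"):  "Ready",       # Transition 2 (Time-Run-Out)
    ("Running", "block"):    "Blocked",     # Transition 1 (waits for an event)
    ("Blocked", "wakeup"):   "Ready",       # Transition 4 (event happens)
    ("Running", "exit"):     "Terminated",  # Transition 6
}

class Process:
    def __init__(self):
        self.state = "New"

    def fire(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"illegal transition: {event!r} in {self.state}")
        self.state = TRANSITIONS[key]

# A typical lifetime: created, scheduled, waits for I/O, resumes, finishes.
p = Process()
for e in ["admit", "dispatch", "block", "wakeup", "dispatch", "exit"]:
    p.fire(e)
```

Note that there is no `("Blocked", "dispatch")` entry: a blocked process must first be woken up into Ready, matching the observation that a Blocked process cannot run even if the CPU is free.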

Swapping

Many operating systems follow the process model shown above. However, in operating
systems which do not employ virtual memory, the processor will be idle most of the
time, considering the difference between the speed of I/O and the processor. There will
be many processes waiting for I/O in memory, exhausting the memory. If there is no
ready process to run, new processes cannot be created, as there is no memory available to
accommodate a new process. Thus the processor has to wait until one of the waiting
processes becomes ready after completion of an I/O operation.

This problem can be solved by adding two more states to the above process model, using
a swapping technique. Swapping involves moving part or all of a process from main
memory to disk. When none of the processes in main memory is in the ready state, the
operating system swaps one of the blocked processes out to disk, into a suspend queue.
This is a queue of existing processes that have been temporarily moved out of main
memory, or suspended. The operating system then either creates a new process or brings
in a swapped-out process from the disk which has become ready.
Seven State Process Model

Figure 3.2 below shows the seven-state process model, which uses the above-described
swapping technique.




Apart from the transitions we have seen in five states model, following are the new
transitions which occur in the above seven state model.

   •   Blocked to Blocked / Suspend: If there are no ready processes in main
       memory, at least one blocked process is swapped out to make room for another
       process that is not blocked.
   •   Blocked / Suspend to Blocked: If a process terminates, making space in main
       memory, and there is a high-priority process which is blocked but suspended,
       and it is anticipated that it will become unblocked very soon, the process is
       brought into main memory.
   •   Blocked / Suspend to Ready / Suspend: A process is moved from Blocked /
       Suspend to Ready / Suspend if the event on which the process was waiting
       occurs but there is no space in main memory.
   •   Ready / Suspend to Ready: If there are no ready processes in main memory,
       the operating system has to bring one into main memory to continue execution.
       Sometimes this transition takes place even when there are ready processes in
       main memory, if they have lower priority than one of the processes in the
       Ready / Suspend state; the high-priority process is then brought into main
       memory.
   •   Ready to Ready / Suspend: Normally blocked processes are suspended by the
       operating system, but sometimes, to free a large block of memory, a ready
       process may be suspended. In this case, normally the low-priority processes are
       suspended.
   •   New to Ready / Suspend: When a new process is created, it should be added to
       the Ready state. But sometimes sufficient memory may not be available to
       allocate to the newly created process. In this case, the new process is shifted to
       Ready / Suspend.
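The swapper's two central decisions in the seven-state model can be sketched as a function: when main memory holds no ready process, move a blocked one out to the suspend queue; when memory is available again, bring a Ready / Suspend process back in. This is a simplified illustration: the function name, the (name, state) tuples and the `memory_free` flag are all invented here, and priority considerations are ignored.

```python
def balance_memory(in_memory, suspend_queue, memory_free):
    """Illustrative swapper logic for the seven-state model.

    in_memory: list of (name, state) pairs for processes in main memory.
    suspend_queue: list of (name, state) pairs swapped out to disk.
    memory_free: whether there is room to bring a suspended process back.
    """
    # Blocked -> Blocked/Suspend: no ready process in memory, so swap one
    # blocked process out to make room for a process that is not blocked.
    if not any(s == "Ready" for _, s in in_memory):
        for i, (name, s) in enumerate(in_memory):
            if s == "Blocked":
                in_memory.pop(i)
                suspend_queue.append((name, "Blocked/Suspend"))
                break
    # Ready/Suspend -> Ready: memory is available, bring a ready process in.
    if memory_free:
        for i, (name, s) in enumerate(suspend_queue):
            if s == "Ready/Suspend":
                suspend_queue.pop(i)
                in_memory.append((name, "Ready"))
                break
    return in_memory, suspend_queue

# Two blocked processes fill memory while a ready one sits on disk:
mem = [("A", "Blocked"), ("B", "Blocked")]
disk = [("C", "Ready/Suspend")]
mem, disk = balance_memory(mem, disk, memory_free=True)
```

After the call, process A has been swapped out and process C brought in, so the processor once again has a ready process to run.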

Process Control

In this section we will study structure of a process, process control block, modes of
process execution, and process switching.

Process Structure

After studying the process states, we will now see where a process resides and what the
physical manifestation of a process is.

The location of the process depends on the memory management scheme being used. In
the simplest case, a process is maintained in secondary memory, and to manage this
process, at least a small part of it is maintained in main memory. To execute the process,
the entire process or part of it is brought into main memory, and for that the operating
system needs to know the location of the process.


                          Process identification
                          Processor state information
                          Process control information
                          User Stack

                          Private user address space (program, data)

                          Shared address space

                                   Figure 3.3: Process Image




The obvious contents of a process are the User Program to be executed and the User
Data associated with that program. Apart from these, there are two major parts of a
process: the System Stack, which is used to store parameters and calling addresses for
procedure and system calls, and the Process Control Block, which is nothing but the
collection of process attributes needed by the operating system to control the process.
The collection of user program, data, system stack, and process control block is called
the Process Image, as shown in figure 3.3 above.

Process Control Block
A process control block, as shown in figure 3.4 below, contains various attributes
required by the operating system to control a process, such as process state, program
counter, CPU state, CPU scheduling information, memory management information, I/O
state information, etc.

These attributes can be grouped into three general categories as follows:

  Process identification
  Processor state information
  Process control information



The first category stores information related to process identification, such as the
identifier of the current process, the identifier of the process which created this process
(to maintain the parent-child process relationship), and the user identifier, i.e. the
identifier of the user on whose behalf this process is being run.

The Processor state information consists of the contents of the processor registers, such
as user-visible registers, control and status registers which includes program counter and
program status word, and stack pointers.

The third category, Process Control Information, is mainly required for the control of a
process. The information includes: scheduling and state information, data structuring,
inter-process communication, process privileges, memory management, and resource
ownership and utilization.

                                  pointer          process state
                                  process number
                                  program counter
                                  registers
                                  memory limits
                                  list of open files
                                  ...

                              Figure 3.4: Process Control Block




Modes of Execution
In order to ensure the correct execution of each process, an operating system must protect
each process's private information (executable code, data, and stack) from uncontrolled
interference by other processes. This is accomplished by suitably restricting the memory
address space available to a process for reading/writing, so that the OS can regain CPU
control through hardware-generated exceptions whenever a process violates those
restrictions.

Also, the OS code needs to execute in a privileged condition with respect to "normal"
code: to manage processes, it needs to be able to execute operations which are forbidden
to "normal" processes. Thus most processors support at least two modes of execution.
Certain instructions can only be executed in the more privileged mode. These include
reading or altering a control register such as the program status word, primitive I/O
instructions, and memory management instructions.

The less privileged mode is referred to as user mode, as user programs are typically
executed in this mode; the more privileged mode, in which important operating system
functions are executed, is called kernel mode (also system mode or control mode).

The current mode information is stored in the PSW, i.e. whether the processor is running
in user mode or kernel mode. The mode change is normally done by executing a change-
mode instruction, typically after a user process invokes a system call, or whenever an
interrupt occurs, as these are operating system functions and need to be executed in
privileged mode. After completion of the system call or interrupt routine, the mode is
changed back to user mode to continue the user process's execution.

Context Switching

To give each process on a multiprogrammed machine a fair share of the CPU, a hardware
clock generates interrupts periodically. This allows the operating system to schedule all
processes in main memory (using scheduling algorithm) to run on the CPU at equal
intervals. Each time a clock interrupt occurs, the interrupt handler checks how much time
the current running process has used. If it has used up its entire time slice, then the CPU
scheduling algorithm (in kernel) picks a different process to run. Each switch of the CPU
from one process to another is called a context switch.

A context is the contents of a CPU’s registers and program counter at any point in time.
Context switching can be described as the kernel (i.e., the core of the operating system)
performing the following activities with regard to processes on the CPU: (1) suspending
the progression of one process and storing the CPU’s state (i.e., the context) for that
process somewhere in memory, (2) retrieving the context of the next process from
memory and restoring it in the CPU’s registers and (3) returning to the location indicated
by the program counter (i.e., returning to the line of code at which the process was
interrupted) in order to resume the process. Figure 3.5 below depicts a context switch
from process P0 to process P1.
Figure 3.5: Process switching
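The three steps of a context switch can be simulated in miniature. In the sketch below the "CPU" and the saved contexts are plain dictionaries, not real hardware, and the process names and register values are invented; the structure of the three steps, however, mirrors the description above.

```python
# Toy simulation of a context switch between two processes.
cpu = {"pc": 0, "regs": {}}                    # the (simulated) CPU state

saved_contexts = {
    "P0": {"pc": 100, "regs": {"r1": 7}},      # P0's last saved context
    "P1": {"pc": 200, "regs": {"r1": 42}},     # P1 was interrupted at pc=200
}

def context_switch(old, new):
    # (1) suspend 'old' and store the CPU's state (the context) in memory
    saved_contexts[old] = {"pc": cpu["pc"], "regs": dict(cpu["regs"])}
    # (2) retrieve the context of 'new' and restore it into the CPU's registers
    cpu["pc"] = saved_contexts[new]["pc"]
    cpu["regs"] = dict(saved_contexts[new]["regs"])
    # (3) execution would now resume at the restored program counter

# Run P0 for a while, then switch to P1 (as in figure 3.5).
cpu.update(pc=100, regs={"r1": 7})
context_switch("P0", "P1")
```

After the switch, the CPU holds P1's program counter and registers, while P0's context sits safely in memory waiting for the next switch back.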

Self Assessment Questions:

       1.   Discuss the process state with its five state process model.
       2.   Explain the seven state process model.
       3.   What is Process Control ? Discuss the process control block.
       4.   Write note on Context Switching.

A context switch is sometimes described as the kernel suspending execution of one
process on the CPU and resuming execution of some other process that had previously
been suspended.

A context switch occurs due to an interrupt, a trap (an error due to the current instruction)
or a system call, as described below:

   •   Clock interrupt: when a process has executed its current time quantum which
       was allocated to it, the process must be switched from running state to ready state,
       and another process must be dispatched for execution.
   •   I/O interrupt: whenever any I/O-related event occurs, the OS is interrupted; the
       OS has to determine the reason for it and take the necessary action for that event.
       Thus the current process is switched to the ready state and the interrupt routine
       is loaded to handle the interrupt event (e.g. after an I/O interrupt the OS moves
       all the processes which were blocked on the event from the blocked state to the
       ready state, and from blocked/suspended to ready/suspended). After completion
       of the interrupt-related actions, one might expect the process which was
       switched out to be brought back for execution, but that does not necessarily
       happen; at this point the scheduler again decides afresh which of the ready
       processes is to be scheduled for execution. This is important, as it allows any
       high-priority process added to the ready queue during the interrupt-handling
       period to be scheduled.
   •   Memory fault: when the virtual memory technique is used for memory
       management, it often happens that a process refers to a memory address which
       is not present in main memory and needs to be brought in. As the memory-block
       transfer takes time, another process should be given a chance to execute and the
       current process should be blocked. Thus the OS blocks the current process,
       issues an I/O request to get the memory block into memory, switches the current
       process to the blocked state, and loads another process for execution.
   •   Trap: if the instruction being executed causes an error or exception, then
       depending on the criticality of the error / exception and the design of the
       operating system, the OS may either move the process to the exit state, or
       continue executing the current process after a possible recovery.

System call: many times a process has to invoke a system call for a privileged job; for
this, the current process is blocked and the respective operating system's system-call code
is executed. Thus the context of the current process is switched to the system-call code.

Example: UNIX Process

Let us see the example of UNIX System V, which makes use of a simple but powerful
process facility that is highly visible to the user. The following figure shows the model
followed by UNIX, in which most of the operating system executes within the
environment of a user process. Thus, two modes, user and kernel, are required. UNIX
uses two categories of processes: system processes and user processes. System processes
run in kernel mode and execute operating system code to perform administrative and
housekeeping functions, such as allocation of memory and process swapping. User
processes operate in user mode to execute user programs and utilities, and in kernel mode
to execute instructions belonging to the kernel. A user process enters kernel mode by
issuing a system call, when an exception (fault) is generated, or when an interrupt occurs.
A total of nine process states are recognized by the UNIX operating system, as explained
below:

   •   User Running: Executing in user mode.
   •   Kernel Running: Executing in kernel mode.
   •   Ready to Run, in Memory: Ready to run as soon as the kernel schedules it.
   •   Asleep in Memory: Unable to execute until an event occurs; process is in main
       memory (a blocked state).
   •   Ready to Run, Swapped: Process is ready to run, but the swapper must swap the
       process into main memory before the kernel can schedule it to execute.
   •   Sleeping, Swapped: The process is awaiting an event and has been swapped to
       secondary storage (a blocked state).
   •   Preempted: Process is returning from kernel to user mode, but the kernel
       preempts it and does a process switch to schedule another process.
   •   Created: Process is newly created and not yet ready to run.
   •   Zombie: Process no longer exists, but it leaves a record for its parent process to
       collect.
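The nine states above can be sketched as a small enumeration. This is an illustrative model only (the names and the helper function are our own), but it captures one point made below: two of the states form a single queue for dispatching purposes.

```python
from enum import Enum, auto

# The nine UNIX System V process states described above.
class ProcState(Enum):
    USER_RUNNING = auto()      # executing in user mode
    KERNEL_RUNNING = auto()    # executing in kernel mode
    READY_IN_MEMORY = auto()   # ready to run as soon as scheduled
    ASLEEP_IN_MEMORY = auto()  # blocked, in main memory
    READY_SWAPPED = auto()     # ready, but must be swapped in first
    SLEEPING_SWAPPED = auto()  # blocked, on secondary storage
    PREEMPTED = auto()         # preempted on return to user mode
    CREATED = auto()           # newly created, not yet ready
    ZOMBIE = auto()            # terminated, record kept for parent

# Ready to Run, in Memory and Preempted form one dispatch queue.
RUNNABLE_IN_MEMORY = {ProcState.READY_IN_MEMORY, ProcState.PREEMPTED}

def dispatchable(state: ProcState) -> bool:
    """True if the kernel can schedule this process without swapping it in."""
    return state in RUNNABLE_IN_MEMORY
```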

UNIX employs two Running states to indicate whether the process is executing in user
mode or kernel mode. A distinction is made between the two states: (Ready to Run, in
Memory) and (Preempted). These are essentially the same state, as indicated by the
dotted line joining them. The distinction is made to emphasize the way in which the
preempted state is entered. When a process is running in kernel mode (as a result of a
supervisor call, clock interrupt, or I/O interrupt), there will come a time when the kernel
has completed its work and is ready to return control to the user program. At this point,
the kernel may decide to preempt the current process in favor of one that is ready and of
higher priority. In that case, the current process moves to the preempted state. However,
for purposes of dispatching, those processes in the preempted state and those in the
Ready to Run, in Memory state form one queue.

Preemption can only occur when a process is about to move from kernel mode to user
mode. While a process is running in kernel mode, it may not be preempted. This makes
UNIX unsuitable for real-time processing.

Two processes are unique in UNIX. Process 0 is a special process that is created when
the system boots; in effect, it is predefined as a data structure loaded at boot time. It is the
swapper process. In addition, process 0 spawns process 1, referred to as the init process;
all other processes in the system have process 1 as an ancestor. When a new interactive
user logs onto the system, it is process 1 that creates a user process for that user.
Subsequently, the user process can create child processes in a branching tree, so that any
particular application can consist of a number of related processes.
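The branching tree of parent and child processes can be sketched with `fork()`. This POSIX-only example (Python's `os.fork` is a thin wrapper over the system call) shows a parent creating a child and collecting its exit record, which is also how a zombie entry is finally reaped:

```python
import os

# The parent fork()s a child; the child exits immediately, and the
# parent wait()s for it so the child's zombie record is collected.
child = os.fork()
if child == 0:
    # Child branch: a new process whose ancestor chain leads back to
    # the init process (process 1). It exits with status 0.
    os._exit(0)

# Parent branch: `child` holds the child's pid; collect its status.
_, status = os.waitpid(child, 0)
assert os.WIFEXITED(status)   # the child terminated normally
```

An application can repeat this pattern in each child, producing the branching tree of related processes described above.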

Threads

A thread is a single sequential stream of execution within a process. Because threads have
some of the properties of processes, they are sometimes called lightweight processes.
Threads allow multiple streams of execution within one process, and in many respects they
are a popular way to improve application performance through parallelism. The CPU
switches rapidly back and forth among the threads, giving the illusion that the threads are
running in parallel. Like a traditional process (i.e., a process with one thread), a thread can
be in any of several states (Running, Blocked, Ready, or Terminated). Each thread has its
own stack: since threads generally call different procedures, each has a different execution
history, and this is why every thread needs its own stack. In an operating system with a
thread facility, the basic unit of CPU utilization is the thread. A thread consists of a
program counter (PC), a register set, and a stack space. Threads are not independent of one
another the way processes are; as a result, each thread shares with the other threads its code
section, data section, and OS resources such as open files and signals. This shared
environment is also known as a task.
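The sharing described above can be sketched with POSIX-style threads. In this illustrative Python example (the names `counter` and `worker` are our own), the shared list lives in the process's data section and is visible to every thread, while each thread's local variables live on its own private stack:

```python
import threading

# `counter` is in the shared data section: every thread sees it.
counter = []
lock = threading.Lock()

def worker(name: str) -> None:
    # `name` is a local variable on this thread's private stack;
    # the append mutates state shared by the whole process.
    with lock:
        counter.append(name)

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(counter))   # → ['t0', 't1', 't2']
```

The lock is needed precisely because the threads share the data section; their stacks, by contrast, need no protection from one another.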

Processes vs. Threads
EduProz provides free classroom facilities for distance learning courses

EduProz provides free classroom facilities for distance learning courses

  • 1. E-528-529, sector-7, Dwarka, New delhi-110075 (Nr. Ramphal chowk and Sector 9 metro station) Ph. 011-47350606, (M) 7838010301-04 www.eduproz.in Educate Anytime...Anywhere... "Greetings For The Day" About Eduproz We, at EduProz, started our voyage with a dream of making higher education available for everyone. Since its inception, EduProz has been working as a stepping-stone for the students coming from varied backgrounds. The best part is – the classroom for distance learning or correspondence courses for both management (MBA and BBA) and Information Technology (MCA and BCA) streams are free of cost. Experienced faculty-members, a state-of-the-art infrastructure and a congenial environment for learning - are the few things that we offer to our students. Our panel of industrial experts, coming from various industrial domains, lead students not only to secure good marks in examination, but also to get an edge over others in their professional lives. Our study materials are sufficient to keep students abreast of the present nuances of the industry. In addition, we give importance to regular tests and sessions to evaluate our students’ progress. Students can attend regular classes of distance learning MBA, BBA, MCA and BCA courses at EduProz without paying anything extra. Our centrally air-conditioned classrooms, well-maintained library and well- equipped laboratory facilities provide a comfortable environment for learning. Honing specific skills is inevitable to get success in an interview. Keeping this in mind, EduProz has a career counselling and career development cell where we help student to prepare for interviews. Our dedicated placement cell has been helping students to land in their dream jobs on completion of the course. 
EduProz is strategically located in Dwarka, West Delhi (walking distance from Dwarka Sector 9 Metro Station and 4-minutes drive from the national highway); students can easily come to our centre from anywhere Delhi and neighbouring Gurgaon, Haryana and avail of a quality-oriented education facility at apparently no extra cost. Why Choose Edu Proz for distance learning? • Edu Proz provides class room facilities free of cost. • In EduProz Class room teaching is conducted through experienced faculty. • Class rooms are spacious fully air-conditioned ensuring comfortable ambience. • Course free is not wearily expensive. • Placement assistance and student counseling facilities. • Edu Proz unlike several other distance learning courses strives to help and motivate pupils to get
  • 2. high grades thus ensuring that they are well placed in life. • Students are groomed and prepared to face interview boards. • Mock tests, unit tests and examinations are held to evaluate progress. • Special care is taken in the personality development department. "HAVE A GOOD DAY" Karnataka State Open University (KSOU) was established on 1st June 1996 with the assent of H.E. Governor of Karnataka as a full fledged University in the academic year 1996 vide Government notification No/EDI/UOV/dated 12th February 1996 (Karnataka State Open University Act – 1992). The act was promulgated with the object to incorporate an Open University at the State level for the introduction and promotion of Open University and Distance Education systems in the education pattern of the State and the country for the Co-ordination and determination of standard of such systems. Keeping in view the educational needs of our country, in general, and state in particular the policies and programmes have been geared to cater to the needy. Karnataka State Open University is a UGC recognised University of Distance Education Council (DEC), New Delhi, regular member of the Association of Indian Universities (AIU), Delhi, permanent member of Association of Commonwealth Universities (ACU), London, UK, Asian Association of Open Universities (AAOU), Beijing, China, and also has association with Commonwealth of Learning (COL). Karnataka State Open University is situated at the North–Western end of the Manasagangotri campus, Mysore. The campus, which is about 5 kms, from the city centre, has a serene atmosphere ideally suited for academic pursuits. The University houses at present the Administrative Office, Academic Block, Lecture Halls, a well-equipped Library, Guest House Cottages, a Moderate Canteen, Girls Hostel and a few cottages providing limited accommodation to students coming to Mysore for attending the Contact Programmes or Term-end examinations.
  • 3. Unit 1: Overview of Operating Systems: This unit covers an introduction to operating systems and their evolution, as well as OS components and services. Introduction to Operating Systems Programs, Code Files, Processes and Threads • A sequence of instructions telling the computer what to do is called a program. The user normally uses a text editor to write their program in a high-level language, such as Pascal, C, or Java. Alternatively, they may write it in assembly language. Assembly language is a computer language whose statements have an almost one-to-one correspondence to the instructions understood by the CPU of the computer. It provides a way of specifying in precise detail what machine code the assembler should create. A compiler is used to translate a high-level language program into assembly language or machine code, and an assembler is used to translate an assembly language program into machine code. A linker is used to combine relocatable object files (code files corresponding to incomplete portions of a program) into executable code files (complete code files, for which the addresses have been resolved for all global functions and variables). The text for a program written in a high-level language or assembly language is normally saved in a source file on disk. Machine code for a program is normally saved in a code file on disk. The machine code is loaded into the virtual memory for a process when the process attempts to execute the program. The notion of a program is becoming more complex nowadays because of shared libraries. In the old days, the user code for a process was all in one file. However, with GUI libraries becoming so large, this is no longer possible. Library code is now stored in memory that is shared by all processes that use it. Perhaps it is best to use the term program for the machine code stored in or derived from a single code file. Code files contain more than just machine code.
On UNIX, a code file starts with a header, containing information on the position and size of the code (“text”), initialised data, and uninitialised data segments of the code file. The header also contains other information, such as the initial value to give the program counter (the “entry point”) and global pointer register. The data for the code and initialised data segments then follows.
  • 4. As well as the above information, code files can contain a symbol table – a table indicating the names of all functions and global variables, and the virtual addresses they correspond to. The symbol table is used by the linker, when it combines several relocatable object files into a single executable code file, to resolve references to functions in shared libraries. The symbol table is also used for debugging. The structure of UNIX code files on the Alpha is very complex, due to the use of shared libraries. • When a user types in the name of a command in the UNIX shell, this results in the creation of what is called a process. On any large computer, especially one with more than one person using it at the same time, there are normally many processes executing at any given time. Under UNIX, every time a user types in a command, they create a separate process. If several users execute the same command, then each one creates a different process. The Macintosh is a little different from UNIX. If the user double clicks on several data files for an application, only one process is created, and this process manages all the data files. A process consists of the virtual memory, information on open files, and other operating system resources shared by its threads of execution, all of which execute in the same virtual memory. The threads in a process execute not only the code from a user program. They can also execute shared library code, operating system kernel code, and (on the Alpha) what is called PALcode. A process is created to execute a command. The code file for the command is used to initialise the virtual memory containing the user code and global variables. The user stack for the initial thread is cleared, and the parameters to the command are passed as parameters to the main function of the program. Files are opened corresponding to the standard input and output (keyboard and screen, unless file redirection is used).
When a process is created, it is created with a single thread of execution. Conventional processes never have more than a single thread of execution, but multi-threaded processes are now becoming commonplace. We often speak about a program executing, or a process executing a program, when we really mean that a thread within the process executes the program. In UNIX, a new process executing a new program is created by the fork() system call (which creates an almost identical copy of an existing process, executing the same program), followed by the exec() system call (which replaces the program being executed by the new program).
  • 5. In the Java programming language, a new process executing a new program is created by the exec() method in the Runtime class. The Java exec() is probably implemented as a combination of the UNIX fork() and exec() system calls. • A thread is an instance of execution (the entity that executes). All the threads that make up a process share access to the same user program, virtual memory, open files, and other operating system resources. Each thread has its own program counter, general purpose registers, and user and kernel stack. The program counter and general purpose registers for a thread are stored in the CPU when the thread is executing, and saved away in memory when it is not executing. The Java programming language supports the creation of multiple threads. To create a thread in Java, we create an object that implements the Runnable interface (has a run() method), and use this to create a new Thread object. To initiate the execution of the thread, we invoke the start() method of the thread, which invokes the run() method of the Runnable object. The threads that make up a process need to use some kind of synchronisation mechanism to avoid more than one thread accessing shared data at the same time. In Java, synchronisation is done by synchronised methods. The wait(), notify(), and notifyAll() methods in the Object class are used to allow a thread to wait until the data has been updated by another thread, and to notify other threads when the data has been altered. In UNIX C, the pthreads library contains functions to create new threads, and to provide the equivalent of synchronised methods, wait(), notify(), etc. The Java mechanism is in fact based on the pthreads library. In Java, synchronisation is built into the design of the language (the compiler knows about synchronised methods).
In C, there is no syntax to specify that a function (method) is synchronised, and the programmer has to explicitly put in code at the start and end of the method to gain and relinquish exclusive access to a data structure. Some people call threads lightweight processes, and processes heavyweight processes. Some people call processes tasks. Many application programs, such as Microsoft word, are starting to make use of multiple threads. For example, there is a thread that processes the input, and a thread for doing repagination in the background. A compiler could have multiple threads, one for lexical analysis, one for parsing, one for analysing the abstract syntax tree. These can all execute in parallel, although the parser cannot execute ahead of the lexical analyser, and the abstract syntax tree analyser can only process the portion of the abstract syntax tree already generated by the parser. The code for performing graphics can easily be sped up by having multiple threads, each painting a portion of the screen. File and network servers have to deal with multiple external requests, many of which block before the reply is given. An elegant way of programming servers is to have a thread for each request.
  • 6. Multi-threaded processes are becoming very important, because computers with multiple processors are becoming commonplace, as are distributed systems, and servers. It is important that you learn how to program in this manner. Multi-threaded programming, particularly dealing with synchronisation issues, is not trivial, and a good conceptual understanding of synchronisation is essential. Synchronisation is dealt with fully in the stage 3 operating systems paper. Objectives An operating system can be thought of as having three objectives: Convenience: An operating system makes a computer more convenient to use. Efficiency: An operating system allows the computer system resources to be used in an efficient manner. Ability to evolve: An operating system should be constructed in such a way as to permit the effective development, testing and introduction of new system functions without interfering with current services provided. What is an Operating System? An operating system (OS) is a program that controls the execution of an application program and acts as an interface between the user and computer hardware. The purpose of an OS is to provide an environment in which a user can execute programs in a convenient and efficient manner. The operating system must provide certain services to programs and to the users of those programs in order to make the programming task easier, these services will differ from one OS to another. Functions of an Operating System Modern Operating systems generally have following three major goals. Operating systems generally accomplish these goals by running processes in low privilege and providing service calls that invoke the operating system kernel in high-privilege state. To hide details of hardware An abstraction is software that hides lower level details and provides a set of higher-level functions. 
An operating system transforms the physical world of devices, instructions, memory, and time into a virtual world that is the result of abstractions built by the operating system. There are several reasons for abstraction.
  • 7. First, the code needed to control peripheral devices is not standardized. Operating systems provide subroutines called device drivers that perform operations on behalf of programs, for example, input/output operations. Second, the operating system introduces new functions as it abstracts the hardware. For instance, the operating system introduces the file abstraction so that programs do not have to deal with disks. Third, the operating system transforms the computer hardware into multiple virtual computers, each belonging to a different program. Each program that is running is called a process. Each process views the hardware through the lens of abstraction. Fourth, the operating system can enforce security through abstraction. Resources Management An operating system, as resource manager, controls how processes (the active agents) may access resources (passive entities). One can view operating systems from two points of view: resource manager and extended machine. From the resource manager point of view, operating systems manage the different parts of the system efficiently, and from the extended machine point of view, operating systems provide a virtual machine to users that is more convenient to use. Structurally, operating systems can be designed as a monolithic system, a hierarchy of layers, a virtual machine system, a micro-kernel, or using the client-server model. The basic concepts of operating systems are processes, memory management, I/O management, the file systems, and security. Provide an effective user interface The user interacts with the operating system through the user interface and is usually interested in the look and feel of the operating system. The most important components of the user interface are the command interpreter, the file system, on-line help, and application integration. The recent trend has been toward increasingly integrated graphical user interfaces that encompass the activities of multiple processes on networks of computers.
Evolution of Operating System Operating system and computer architecture have had a great deal of influence on each other. To facilitate the use of the hardware, OS’s were developed. As operating systems were designed and used, it became obvious that changes in the design of the hardware could simplify them. Early Systems In the earliest days of electronic digital computing, everything was done on the bare hardware. Very few computers existed and those that did exist were experimental in
  • 8. nature. The researchers who were making the first computers were also the programmers and the users. They worked directly on the “bare hardware”. There was no operating system. The experimenters wrote their programs in assembly language and a running program had complete control of the entire computer. Debugging consisted of a combination of fixing both the software and hardware, rewriting the object code and changing the actual computer itself. The lack of any operating system meant that only one person could use a computer at a time. Even in the research lab, there were many researchers competing for limited computing time. The first solution was a reservation system, with researchers signing up for specific time slots. The high cost of early computers meant that it was essential that the rare computers be used as efficiently as possible. The reservation system was not particularly efficient. If a researcher finished work early, the computer sat idle until the next time slot. If the researcher’s time ran out, the researcher might have to pack up his or her work in an incomplete state at an awkward moment to make room for the next researcher. Even when things were going well, a lot of the time the computer actually sat idle while the researcher studied the results (or studied memory of a crashed program to figure out what went wrong). The solution to this problem was to have programmers prepare their work off-line on some input medium (often on punched cards, paper tape, or magnetic tape) and then hand the work to a computer operator. The computer operator would load up jobs in the order received (with priority overrides based on politics and other factors). Each job still ran one at a time with complete control of the computer, but as soon as a job finished, the operator would transfer the results to some output medium (punched tape, paper tape, magnetic tape, or printed paper) and deliver the results to the appropriate programmer. 
If the program ran to completion, the result would be some end data. If the program crashed, memory would be transferred to some output medium for the programmer to study (because some of the early business computing systems used magnetic core memory, these became known as “core dumps”) Soon after the first successes with digital computer experiments, computers moved out of the lab and into practical use. The first practical application of these experimental digital computers was the generation of artillery tables for the British and American armies. Much of the early research in computers was paid for by the British and American militaries. Business and scientific applications followed. As computer use increased, programmers noticed that they were duplicating the same efforts. Every programmer was writing his or her own routines for I/O, such as reading input from a magnetic tape or writing output to a line printer. It made sense to write a common device driver for each input or output device and then have every programmer share the same device drivers rather than each programmer writing his or her own. Some
  • 9. programmers resisted the use of common device drivers in the belief that they could write “more efficient”, faster, or “better” device drivers of their own. Additionally, each programmer was writing his or her own routines for fairly common and repeated functionality, such as mathematics or string functions. Again, it made sense to share the work instead of everyone repeatedly “reinventing the wheel”. These shared functions would be organized into libraries and could be inserted into programs as needed. In the spirit of cooperation among early researchers, these library functions were published and distributed for free, an early example of the power of the open source approach to software development. Simple Batch Systems When punched cards were used for user jobs, processing of a job involved physical actions by the system operator, e.g., loading a deck of cards into the card reader, pressing switches on the computer's console to initiate a job, etc. These actions wasted a lot of central processing unit (CPU) time. [Figure 1.1: Simple Batch System – memory divided into an Operating System (batch monitor) area and a User Program Area] To speed up processing, jobs with similar needs were batched together and were run as a group. Batch processing (BP) was implemented by locating a component of the BP system, called the batch monitor or supervisor, permanently in one part of the computer's memory. The remaining memory was used to process a user job – the current job in the batch – as shown in figure 1.1 above. The delay between job submission and completion was considerable in a batch processed system, as a number of programs were put in a batch and the entire batch had to be processed before the results were printed. Further, card reading and printing were slow, as they used slower mechanical units compared to the CPU, which was electronic. The speed mismatch was of the order of 1000. To alleviate this problem programs were spooled. Spool is an acronym for simultaneous peripheral operation on-line.
In essence, the idea was to use a cheaper processor, known as a peripheral processing unit (PPU), to read programs and data from cards and store them on a disk. The faster CPU read programs/data from the disk, processed them, and wrote the results back on the disk. The cheaper processor then read the results from the disk and printed them. Multi Programmed Batch Systems Even though disks are faster than a card reader/printer, they are still two orders of magnitude slower than the CPU. It is thus useful to have several programs ready to run waiting in the main memory of the CPU. When one program needs input/output (I/O) from
  • 10. disk, it is suspended and another program whose data is already in main memory (as shown in figure 1.2 below) is taken up for execution. This is called multiprogramming. [Figure 1.2: Multi Programmed Batch Systems – memory holding the Operating System and Programs 1 to 4] Multiprogramming (MP) increases CPU utilization by organizing jobs such that the CPU always has a job to execute. Multiprogramming is the first instance where the operating system must make decisions for the user. The MP arrangement ensures concurrent operation of the CPU and the I/O subsystem. It ensures that the CPU is allocated to a program only when it is not performing an I/O operation. Time Sharing Systems Multiprogramming features were superimposed on BP to ensure good utilization of the CPU, but from the point of view of a user the service was poor, as the response time, i.e., the time elapsed between submitting a job and getting the results, was unacceptably high. Development of interactive terminals changed the scenario. Computation became an on-line activity. A user could provide inputs to a computation from a terminal and could also examine the output of the computation on the same terminal. Hence, the response time needed to be drastically reduced. This was achieved by storing programs of several users in memory and providing each user a slice of time on the CPU to process his/her program. Distributed Systems A recent trend in computer systems is to distribute computation among several processors. In loosely coupled systems the processors do not share memory or a clock. Instead, each processor has its own local memory. The processors communicate with one another using a communication network. The processors in a distributed system may vary in size and function, and are referred to by a number of different names, such as sites, nodes, computers and so on, depending on the context. The major reasons for building distributed systems are:
  • 11. Resource sharing: If a number of different sites are connected to one another, then a user at one site may be able to use the resources available at another. Computation speed up: If a particular computation can be partitioned into a number of sub-computations that can run concurrently, then a distributed system may allow a user to distribute the computation among the various sites to run them concurrently. Reliability: If one site fails in a distributed system, the remaining sites can potentially continue operations. Communication: There are many instances in which programs need to exchange data with one another. A distributed database system is an example of this. Real-time Operating System The advent of timesharing provided good response times to computer users. However, timesharing could not satisfy the requirements of some applications. Real-time (RT) operating systems were developed to meet the response requirements of such applications. There are two flavours of real-time systems. A hard real-time system guarantees that critical tasks complete at a specified time. A less restrictive type of real-time system is a soft real-time system, where a critical real-time task gets priority over other tasks, and retains that priority until it completes. Several areas in which this type is useful are multimedia, virtual reality, and advanced scientific projects such as undersea exploration and planetary rovers. Because of the expanded uses for soft real-time functionality, it is finding its way into most current operating systems, including major versions of Unix and Windows NT. A real-time operating system is one which helps to fulfil the worst-case response time requirements of an application. An RT OS provides the following facilities for this purpose: 1. Multitasking within an application. 2. Ability to define the priorities of tasks. 3. Priority driven or deadline oriented scheduling. 4. Programmer defined interrupts.
A task is a sub-computation in an application program, which can be executed concurrently with other sub-computations in the program, except at specific places in its execution called synchronization points. Multi-tasking, which permits the existence of many tasks within the application program, provides the possibility of overlapping the CPU and I/O activities of the application with one another. This helps in reducing its
  • 12. elapsed time. The ability to specify priorities for the tasks provides additional controls to a designer while structuring an application to meet its response-time requirements. Real-time operating systems (RTOS) are specifically designed to respond to events that happen in real time. This can include computer systems that run factory floors, computer systems for emergency room or intensive care unit equipment (or even the entire ICU), computer systems for air traffic control, or embedded systems. RTOSs are grouped according to the response time that is acceptable (seconds, milliseconds, microseconds) and according to whether or not they involve systems where failure can result in loss of life. Examples of real-time operating systems include QNX, Jaluna-1, ChorusOS, LynxOS, Windows CE .NET, VxWorks AE, etc. Self assessment questions 1. What do the terms program, process, and thread mean? 2. What is the purpose of a compiler, assembler and linker? 3. What is the structure of a code file? What is the purpose of the symbol table in a code file? 4. Why are shared libraries essential on modern computers? Operating System Components Even though not all systems have the same structure, many modern operating systems share the same goal of supporting the following types of system components. Process Management The operating system manages many kinds of activities ranging from user programs to system programs like the printer spooler, name servers, file server, etc. Each of these activities is encapsulated in a process. A process includes the complete execution context (code, data, PC, registers, OS resources in use, etc.). It is important to note that a process is not a program. A process is only ONE instance of a program in execution. Many processes can be running the same program. The five major activities of an operating system in regard to process management are: 1. Creation and deletion of user and system processes. 2.
Suspension and resumption of processes. 3. A mechanism for process synchronization. 4. A mechanism for process communication. 5. A mechanism for deadlock handling.
  • 13. Main-Memory Management Primary memory or main memory is a large array of words or bytes. Each word or byte has its own address. Main memory provides storage that can be accessed directly by the CPU. That is to say, for a program to be executed, it must be in the main memory. The major activities of an operating system in regard to memory management are: 1. Keep track of which parts of memory are currently being used and by whom. 2. Decide which processes are loaded into memory when memory space becomes available. 3. Allocate and de-allocate memory space as needed. File Management A file is a collection of related information defined by its creator. Computers can store files on the disk (secondary storage), which provides long term storage. Some examples of storage media are magnetic tape, magnetic disk and optical disk. Each of these media has its own properties, like speed, capacity, data transfer rate and access methods. A file system is normally organized into directories to ease their use. These directories may contain files and other directories. The five major activities of an operating system in regard to file management are: 1. The creation and deletion of files. 2. The creation and deletion of directories. 3. The support of primitives for manipulating files and directories. 4. The mapping of files onto secondary storage. 5. The backup of files on stable storage media. I/O System Management The I/O subsystem hides the peculiarities of specific hardware devices from the user. Only the device driver knows the peculiarities of the specific device to which it is assigned. Secondary-Storage Management Generally speaking, systems have several levels of storage, including primary storage, secondary storage and cache storage. Instructions and data must be placed in primary storage or cache to be referenced by a running program.
Because main memory is too small to accommodate all data and programs, and its data are lost when power is lost, the computer system must provide secondary storage to back up main memory. Secondary storage consists of tapes, disks, and other media designed to hold information that will
  • 14. eventually be accessed in primary storage. Storage (primary, secondary, cache) is ordinarily divided into bytes or words consisting of a fixed number of bytes. Each location in storage has an address; the set of all addresses available to a program is called an address space. The three major activities of an operating system in regard to secondary storage management are: 1. Managing the free space available on the secondary-storage device. 2. Allocation of storage space when new files have to be written. 3. Scheduling the requests for memory access. Networking A distributed system is a collection of processors that do not share memory, peripheral devices, or a clock. The processors communicate with one another through communication lines called a network. The communication-network design must consider routing and connection strategies, and the problems of contention and security. Protection System If a computer system has multiple users and allows the concurrent execution of multiple processes, then the various processes must be protected from one another's activities. Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system. Command Interpreter System A command interpreter is an interface of the operating system with the user. The user gives commands which are executed by the operating system (usually by turning them into system calls). The main function of a command interpreter is to get and execute the next user-specified command. The command interpreter is usually not part of the kernel, since multiple command interpreters (shells, in UNIX terminology) may be supported by an operating system, and they do not really need to run in kernel mode. There are two main advantages of separating the command interpreter from the kernel. 1.
If we want to change the way the command interpreter looks, i.e., change its interface, we can do so when the command interpreter is separate from the kernel; we cannot change the code of the kernel, so if the interpreter were part of the kernel we could not modify the interface.
  • 15. 2. If the command interpreter is a part of the kernel, it is possible for a malicious process to gain access to certain parts of the kernel that it should not have. To avoid this scenario, it is advantageous to have the command interpreter separate from the kernel. Self Assessment Questions 1. Discuss the various components of an OS. 2. Explain Memory Management and File Management in brief. 3. Write notes on: 1. Secondary-Storage Management 2. Command Interpreter System Operating System Services Following are the five services provided by operating systems for the convenience of the users. Program Execution The purpose of a computer system is to allow the user to execute programs. So the operating system provides an environment where the user can conveniently run programs. The user does not have to worry about memory allocation or multitasking or anything else. These things are taken care of by the operating system. Running a program involves allocating and de-allocating memory, and CPU scheduling in the case of multiple processes. These functions cannot be given to user-level programs. So user-level programs cannot help the user to run programs independently, without help from the operating system. I/O Operations Each program requires input and produces output. This involves the use of I/O. The operating system hides from the user the details of the underlying hardware for the I/O. All the user sees is that the I/O has been performed, without any details. So the operating system, by providing I/O, makes it convenient for the users to run programs. For efficiency and protection, users cannot control I/O directly, so this service cannot be provided by user-level programs. File System Manipulation
  • 16. The output of a program may need to be written into new files, or input taken from some files. The operating system provides this service. The user does not have to worry about secondary storage management. The user gives a command for reading from or writing to a file and sees his/her task accomplished. Thus the operating system makes it easier for user programs to accomplish their task. This service involves secondary storage management. The speed of I/O that depends on secondary storage management is critical to the speed of many programs, and hence it is best left to the operating system to manage it, rather than giving individual users control of it. It is not difficult for user-level programs to provide these services, but for the above-mentioned reasons it is best if this service is left with the operating system. Communications There are instances where processes need to communicate with each other to exchange information. It may be between processes running on the same computer or on different computers. By providing this service the operating system relieves the user from the worry of passing messages between processes. Where messages need to be passed to processes on other computers through a network, this can be done by user programs. The user program may be customized to the specifications of the hardware through which the message transits, and provides the service interface to the operating system. Error Detection An error in one part of the system may cause malfunctioning of the complete system. To avoid such a situation the operating system constantly monitors the system to detect errors. This relieves the user from the worry of errors propagating to various parts of the system and causing malfunctioning.
This service cannot be handled by user programs because it involves monitoring and, in some cases, altering areas of memory, de-allocating the memory of a faulty process, or reclaiming the CPU from a process that goes into an infinite loop. These tasks are too critical to be handed over to user programs; a user program given these privileges could interfere with the correct (normal) operation of the operating system.

Self Assessment Questions
• 17. 1. Explain the five services provided by the operating system.

Operating Systems for Different Computers
Operating systems can be grouped according to functionality: operating systems for supercomputers, computer clusters, mainframes, servers, workstations, desktops, handheld devices, real-time systems, or embedded systems.

OS for Supercomputers: Supercomputers are the fastest and most expensive computers, employed for specialized applications that require immense amounts of mathematical calculation, for example weather forecasting, animated graphics, fluid-dynamics calculations, nuclear-energy research, and petroleum exploration. Of the many operating systems used for supercomputing, UNIX and Linux are the most dominant.

Computer Cluster Operating Systems: A computer cluster is a group of computers that work together so closely that in many respects they can be viewed as a single computer. The components of a cluster are commonly connected to each other through fast local area networks. Besides many open-source operating systems and two versions of Windows Server 2003, Linux is popularly used for computer clusters.

Mainframe Operating Systems: Mainframes used to be the primary form of computer. Mainframes are large centralized computers, and at one time they provided the bulk of business computing through time sharing. Mainframes are still useful for some large-scale tasks, such as centralized billing systems, inventory systems, database operations, etc. Minicomputers were smaller, less expensive versions of mainframes for businesses that could not afford true mainframes. The chief difference between a supercomputer and a mainframe is that a supercomputer channels all its power into executing a few programs as fast as possible, whereas a mainframe uses its power to execute many programs concurrently.
Besides various versions of operating systems by IBM, from those for its early System/360 to the newest z series operating system z/OS, UNIX and Linux are also used as mainframe operating systems.

Server Operating Systems: Servers are computers, or groups of computers, that provide services to other computers connected via a network. Based on the requirements, there are various server operating systems from different vendors, from Microsoft's servers (Windows NT to Windows Server 2003) to OS/2 servers, UNIX servers, Mac OS servers, and various flavors of Linux.
• 18. Workstation Operating Systems: Workstations are more powerful versions of personal computers. Like a desktop computer, a workstation is often used by only one person, but it runs a more powerful version of a desktop operating system. Most of the time workstations are used as clients in a network environment. Popular workstation operating systems are Windows NT Workstation, Windows 2000 Professional, OS/2 clients, Mac OS, UNIX, Linux, etc.

Desktop Operating Systems: A personal computer (PC) is a microcomputer whose price, size, and capabilities make it useful for individuals; such machines are also known as desktop computers or home computers. Desktop operating systems are used for personal computers, for example DOS, Windows 9x, Windows XP, Macintosh OS, Linux, etc.

Embedded Operating Systems: Embedded systems are combinations of processors and special software that sit inside another device, such as the electronic ignition system of a car. Examples of embedded operating systems are Embedded Linux, Windows CE, Windows XP Embedded, FreeDOS, FreeRTOS, etc.

Operating Systems for Handheld Computers: Handheld operating systems are much smaller and less capable than desktop operating systems, so that they can fit into the limited memory of handheld devices. They include Palm OS, Windows CE, EPOC, and many Linux versions such as Qt Palmtop, Pocket Linux, etc.

Summary
An operating system (OS) is a program that controls the execution of application programs and acts as an interface between the user and the computer hardware. The objectives of an operating system are convenience, efficiency, and the ability to evolve. Besides this, the operating system performs functions such as hiding details of the hardware, resource management, and providing an effective user interface. The process management component of the operating system is responsible for the creation, termination, and state transitions of processes.
The memory management unit is mainly responsible for allocating memory to processes, de-allocating it, and keeping track of memory usage by the different processes. The operating system services are program execution, I/O operations, file-system manipulation, communication, and error detection.

Terminal Questions
• 19. 1. What is an operating system?
2. What are the objectives of an operating system?
3. Describe, in brief, the functions of an operating system.
4. Explain the evolution of operating systems in brief.
5. Write a note on batch operating systems. Discuss how they differ from multi-programmed batch systems.
6. What is the difference between multi-programming and time-sharing operating systems?
7. What are the typical features an operating system provides?
8. Explain the functions of the operating system as a file manager.
9. What are the different services provided by an operating system?
10. Write notes on:
   1. Mainframe Operating Systems
   2. Embedded Operating Systems
   3. Server Operating Systems
   4. Desktop Operating Systems

Unit 2: Operating System Architecture
This unit deals with the simple structure, the extended machine, and layered approaches. It covers the different methodologies (models) for OS design. It introduces virtual machines, virtual environments, and machine aggregation, and also describes the implementation techniques.

Introduction
A system as large and complex as a modern operating system must be engineered carefully if it is to function properly and be modified easily. A common approach is to partition the task into small components rather than have one monolithic system. Each of these modules should be a well-defined portion of the system, with carefully defined inputs, outputs, and functions. In this unit, we discuss how the various components of an operating system are interconnected and melded into a kernel.

Objectives
At the end of this unit, readers would be able to understand:
• 20. What is a kernel?
• Monolithic Kernel Architecture
• Layered Architecture
• Microkernel Architecture
• Operating System Components
• Operating System Services

OS as an Extended Machine
We can think of an operating system as an extended machine standing between our programs and the bare hardware. As shown in figure 2.1 above, the operating system interacts with the hardware, hiding it from the application programs and the user. Thus it acts as the interface between user programs and hardware.

Self Assessment Questions
1. What is the role of an operating system?

Simple Structure
Many commercial systems do not have well-defined structures. Frequently, such operating systems started as small, simple, and limited systems and then grew beyond their original scope. MS-DOS is an example of such a system. It was originally designed and implemented by a few people who had no idea that it would become so popular. It was written to provide the most functionality in the least space, so it was not divided into
• 21. modules carefully. In MS-DOS, the interfaces and levels of functionality are not well separated. For instance, application programs are able to access the basic I/O routines to write directly to the display and disk drives. Such freedom leaves MS-DOS vulnerable to errant (or malicious) programs, causing entire-system crashes when user programs fail. Of course, MS-DOS was also limited by the hardware of its era. Because the Intel 8088 for which it was written provides no dual mode and no hardware protection, the designers of MS-DOS had no choice but to leave the base hardware accessible.

Another example of limited structuring is the original UNIX operating system, which was also initially limited by hardware functionality. It consists of two separable parts:
• the kernel and
• the system programs
The kernel is further separated into a series of interfaces and device drivers, which have been added and expanded over the years as UNIX has evolved. We can view the traditional UNIX operating system as being layered. Everything below the system-call interface and above the physical hardware is the kernel. The kernel provides the file system, CPU scheduling, memory management, and other operating-system functions through system calls. Taken in sum, that is an enormous amount of functionality to be combined into one level. This monolithic structure was difficult to implement and maintain.

Self Assessment Questions
1. "In MS-DOS, the interfaces and levels of functionality are not well separated." Comment on this.
2. What are the components of a UNIX operating system?

Layered Approach
With proper hardware support, operating systems can be broken into pieces that are smaller and more appropriate than those allowed by the original MS-DOS or UNIX systems. The operating system can then retain much greater control over the computer and over the applications that make use of that computer.
Implementers have more freedom in changing the inner workings of the system and in creating modular operating systems. Under the top-down approach, the overall functionality and features are determined and then separated into components. Information hiding is also important, because it leaves programmers free to implement the low-level routines as they see fit, provided that the external interface of each routine stays unchanged and that the routine itself performs the advertised task.
• 22. A system can be made modular in many ways. One method is the layered approach, in which the operating system is broken up into a number of layers (levels). The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.

Users
File Systems
Inter-process Communication
I/O and Device Management
Virtual Memory
Primitive Process Management
Hardware
Fig. 2.2: Layered Architecture

An operating-system layer is an implementation of an abstract object made up of data and the operations that can manipulate those data. A typical operating-system layer, say layer M, consists of data structures and a set of routines that can be invoked by higher-level layers. Layer M, in turn, can invoke operations on lower-level layers.

The main advantage of the layered approach is simplicity of construction and debugging. The layers are selected so that each uses the functions (operations) and services of only lower-level layers. This approach simplifies debugging and system verification. The first layer can be debugged without any concern for the rest of the system because, by definition, it uses only the basic hardware (which is assumed correct) to implement its functions. Once the first layer is debugged, its correct functioning can be assumed while the second layer is debugged, and so on. If an error is found during debugging of a particular layer, the error must be on that layer, because the layers below it are already debugged. Thus, the design and implementation of the system are simplified.

Each layer is implemented with only those operations provided by lower-level layers. A layer does not need to know how these operations are implemented; it needs to know only what they do. Hence, each layer hides the existence of certain data structures, operations, and hardware from higher-level layers. The major difficulty with the layered approach involves appropriately defining the various layers.
Because a layer can use only lower-level layers, careful planning is necessary. For example, the device driver for the backing store (disk space used by virtual-memory algorithms) must be at a lower level than the memory-management routines, because memory management requires the ability to use the backing store. Other requirements may not be so obvious. The backing-store driver would normally be above the CPU scheduler, because the driver may need to wait for I/O, and the CPU can be rescheduled during this time. However, on a large system, the CPU scheduler may have more information about all the active processes than can fit in memory. Therefore,
• 23. this information may need to be swapped in and out of memory, requiring the backing-store driver routine to be below the CPU scheduler.

A final problem with layered implementations is that they tend to be less efficient than other types. For instance, when a user program executes an I/O operation, it executes a system call that is trapped to the I/O layer, which calls the memory-management layer, which in turn calls the CPU-scheduling layer, which then passes the request to the hardware. At each layer, the parameters may be modified, data may need to be passed, and so on. Each layer adds overhead to the system call; the net result is a system call that takes longer than one on a non-layered system. These limitations have caused a small backlash against layering in recent years. Fewer layers with more functionality are being designed, providing most of the advantages of modularized code while avoiding the difficult problems of layer definition and interaction.

Self Assessment Questions
1. What is the layered architecture of UNIX?
2. What are the advantages of a layered architecture?

Micro-kernels
We have already seen that as UNIX expanded, the kernel became large and difficult to manage. In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach that modularized the kernel using the microkernel approach. This method structures the operating system by removing all nonessential components from the kernel and implementing them as system-level and user-level programs. The result is a smaller kernel. There is little consensus regarding which services should remain in the kernel and which should be implemented in user space. Typically, however, micro-kernels provide minimal process and memory management, in addition to a communication facility.

Device Drivers | File Server | Client Process | Virtual Memory | ...
• 24. Microkernel
Hardware
Fig. 2.3: Microkernel Architecture

The main function of the microkernel is to provide a communication facility between the client program and the various services that are also running in user space. Communication is provided by message passing: the client program and a service never interact directly. Rather, they communicate indirectly by exchanging messages with the microkernel.

One benefit of the microkernel approach is ease of extending the operating system. All new services are added to user space and consequently do not require modification of the kernel. When the kernel does have to be modified, the changes tend to be fewer, because the microkernel is a smaller kernel. The resulting operating system is easier to port from one hardware design to another. The microkernel also provides more security and reliability, since most services run as user rather than kernel processes; if a service fails, the rest of the operating system remains untouched. Several contemporary operating systems have used the microkernel approach. Tru64 UNIX (formerly Digital UNIX) provides a UNIX interface to the user, but it is implemented with a Mach kernel. The Mach kernel maps UNIX system calls into messages to the appropriate user-level services.

The following figure shows the UNIX operating system architecture. At the center is the hardware, covered by the kernel. Above that are the UNIX utilities and the command interface, such as the shell (sh), etc.
• 25. Self Assessment Questions
1. What other facilities does a micro-kernel provide in addition to the communication facility?
2. What are the benefits of a micro-kernel?

UNIX Kernel Components
The UNIX kernel has the components depicted in figure 2.5 below. The figure is divided into three modes: user mode, kernel mode, and hardware. The user mode contains user programs, which can access the services of the kernel components through the system-call interface. The kernel mode has four major components: system calls, the file subsystem, the process control subsystem, and hardware control. The system calls are the interface between user programs and the file and process control subsystems. The file subsystem is responsible for file and I/O management through device drivers.
• 26. The process control subsystem contains the scheduler, inter-process communication, and memory management. Finally, hardware control is the interface between these two subsystems and the hardware.

Fig. 2.5: UNIX kernel components

Another example is QNX. QNX is a real-time operating system that is also based on the microkernel design. The QNX microkernel provides services for message passing and process scheduling. It also handles low-level network communication and hardware interrupts. All other services in QNX are provided by standard processes that run outside the kernel in user mode.

Unfortunately, microkernels can suffer from performance decreases due to increased system-function overhead. Consider the history of Windows NT. The first release had a layered microkernel organization. However, this version delivered low performance compared with that of Windows 95. Windows NT 4.0 partially redressed the performance problem by moving layers from user space to kernel space and integrating them more closely. By the time Windows XP was designed, its architecture was more monolithic than microkernel.

Self Assessment Questions
1. What are the components of the UNIX kernel?
2. Under what circumstances may a micro-kernel suffer from a performance decrease?
• 27. Modules
Perhaps the best current methodology for operating-system design involves using object-oriented programming techniques to create a modular kernel. Here, the kernel has a set of core components and dynamically links in additional services either during boot time or during run time. Such a strategy uses dynamically loadable modules and is common in modern implementations of UNIX, such as Solaris, Linux, and Mac OS X. For example, the Solaris operating system structure is organized around a core kernel with seven types of loadable kernel modules:
1. Scheduling classes
2. File systems
3. Loadable system calls
4. Executable formats
5. STREAMS modules
6. Miscellaneous
7. Device and bus drivers

Such a design allows the kernel to provide core services yet also allows certain features to be implemented dynamically. For example, device and bus drivers for specific hardware can be added to the kernel, and support for different file systems can be added as loadable modules. The overall result resembles a layered system in that each kernel section has defined, protected interfaces; but it is more flexible than a layered system in that any module can call any other module. Furthermore, the approach is like the microkernel approach in that the primary module has only core functions and knowledge of how to load and communicate with other modules; but it is more efficient, because modules do not need to invoke message passing in order to communicate.

Self Assessment Questions
1. Which strategy uses dynamically loadable modules and is common in modern implementations of UNIX?
2. What are the different loadable modules around which the Solaris operating system structure is organized?

Introduction to Virtual Machine
The layered approach of operating systems is taken to its logical conclusion in the concept of the virtual machine.
The fundamental idea behind a virtual machine is to abstract the hardware of a single computer (the CPU, memory, disk drives, network interface cards, and so forth) into several different execution environments, thereby creating the illusion that each separate execution environment is running its own private
• 28. computer. By using CPU-scheduling and virtual-memory techniques, an operating system can create the illusion that a process has its own processor with its own (virtual) memory. Normally a process has additional features, such as system calls and a file system, which are not provided by the bare hardware. The virtual-machine approach does not provide any such additional functionality, but rather an interface that is identical to the underlying bare hardware. Each process is provided with a (virtual) copy of the underlying computer.

Hardware Virtual Machine
The original meaning of virtual machine, sometimes called a hardware virtual machine, is that of a number of discrete, identical execution environments on a single computer, each of which runs an operating system (OS). This can allow applications written for one OS to be executed on a machine which runs a different OS, or provide execution "sandboxes" which give a greater level of isolation between processes than is achieved when running multiple processes on the same instance of an OS. One use is to provide multiple users the illusion of having an entire computer, one that is their "private" machine, isolated from other users, all on a single physical machine. Another advantage is that booting and restarting a virtual machine can be much faster than with a physical machine, since it may be possible to skip tasks such as hardware initialization. Such software is now often referred to with the terms virtualization and virtual servers. The host software which provides this capability is often referred to as a virtual machine monitor or hypervisor.
Software virtualization can be done in three major ways:
• Emulation, full system simulation, or "full virtualization with dynamic recompilation": the virtual machine simulates the complete hardware, allowing an unmodified OS for a completely different CPU to be run.
• Paravirtualization: the virtual machine does not simulate hardware but instead offers a special API that requires OS modifications. An example of this is XenSource's XenEnterprise (www.xensource.com).
• Native virtualization and "full virtualization": the virtual machine only partially simulates enough hardware to allow an unmodified OS to be run in isolation, but the guest OS must be designed for the same type of CPU. The term native virtualization is also sometimes used to designate that hardware assistance through Virtualization Technology is used.

Application Virtual Machine
Another meaning of virtual machine is a piece of computer software that isolates the application being used by the user from the computer. Because versions of the virtual
• 29. machine are written for various computer platforms, any application written for the virtual machine can be operated on any of those platforms, instead of having to produce separate versions of the application for each computer and operating system. The application is run on the computer using an interpreter or Just-In-Time compilation. One of the best-known examples of an application virtual machine is Sun Microsystems' Java Virtual Machine.

Self Assessment Questions
1. What do you mean by a virtual machine?
2. Differentiate hardware virtual machines and software virtual machines.

Virtual Environment
A virtual environment (otherwise referred to as a virtual private server) is another kind of virtual machine. In fact, it is a virtualized environment for running user-level programs (i.e. not the operating system kernel and drivers, but applications). Virtual environments are created using software implementing the operating system-level virtualization approach, such as Virtuozzo, FreeBSD Jails, Linux-VServer, Solaris Containers, chroot jail, and OpenVZ.

Machine Aggregation
A less common use of the term refers to a computer cluster consisting of many computers that have been aggregated together as a larger and more powerful "virtual" machine. In this case, the software allows a single environment to be created spanning multiple computers, so that the end user appears to be using only one computer rather than several. PVM (Parallel Virtual Machine) and MPI (Message Passing Interface) are two common software packages that permit a heterogeneous collection of networked UNIX and/or Windows computers to be used as a single, large, parallel computer. Thus large computational problems can be solved more cost-effectively by using the aggregate power and memory of many computers than with a traditional supercomputer. The Plan 9 operating system from Bell Labs uses this approach.
Boston Circuits released the gCore (grid-on-chip) central processing unit (CPU), with 16 ARC 750D cores and a "time-machine" hardware module, to provide a virtual machine that uses this approach.

Self Assessment Questions
1. What is a virtual environment?
• 30. 2. Explain machine aggregation.

Implementation Techniques

Emulation of the underlying raw hardware (native execution)
This approach is described as full virtualization of the hardware, and can be implemented using a Type 1 or Type 2 hypervisor. (A Type 1 hypervisor runs directly on the hardware; a Type 2 hypervisor runs on another operating system, such as Linux.) Each virtual machine can run any operating system supported by the underlying hardware. Users can thus run two or more different "guest" operating systems simultaneously, in separate "private" virtual computers.

The pioneer system using this concept was IBM's CP-40, the first (1967) version of IBM's CP/CMS (1967-1972) and the precursor to IBM's VM family (1972-present). With the VM architecture, most users run a relatively simple interactive single-user operating system, CMS, as a "guest" on top of the VM control program (VM-CP). This approach kept the CMS design simple, as if it were running alone; the control program quietly provides multitasking and resource-management services "behind the scenes". In addition to CMS, VM users can run any of the other IBM operating systems, such as MVS or z/OS. z/VM is the current version of VM, and is used to support hundreds or thousands of virtual machines on a given mainframe. Some installations use Linux for zSeries to run Web servers, where Linux runs as the operating system within many virtual machines.

Full virtualization is particularly helpful in operating-system development, when experimental new code can be run at the same time as older, more stable versions, each in a separate virtual machine. (The process can even be recursive: IBM debugged new versions of its virtual-machine operating system, VM, in a virtual machine running under an older version of VM, and even used this technique to simulate new hardware.) The x86 processor architecture as used in modern PCs does not actually meet the Popek and Goldberg virtualization requirements.
Notably, there is no execution mode where all sensitive machine instructions always trap, which would allow per-instruction virtualization. Despite these limitations, several software packages have managed to provide virtualization on the x86 architecture, even though the dynamic recompilation of privileged code, as first implemented by VMware, incurs some performance overhead compared to a VM running on a natively virtualizable architecture such as the IBM System/370 or Motorola MC68020. By now, several other software packages such as Virtual PC, VirtualBox, Parallels Workstation, and Virtual Iron manage to implement virtualization on x86 hardware.
• 31. On the other hand, plex86 can run only Linux under Linux, using a specific patched kernel. It does not emulate a processor, but uses bochs for emulation of motherboard devices. Intel and AMD have introduced features in their x86 processors to enable virtualization in hardware.

Emulation of a non-native system
Virtual machines can also perform the role of an emulator, allowing software applications and operating systems written for one computer processor architecture to be run on another. Some virtual machines emulate hardware that exists only as a detailed specification. For example:
• One of the first was the p-code machine specification, which allowed programmers to write Pascal programs that would run on any computer running virtual-machine software that correctly implemented the specification.
• The specification of the Java virtual machine.
• The Common Language Infrastructure virtual machine at the heart of the Microsoft .NET initiative.
• Open Firmware allows plug-in hardware to include boot-time diagnostics, configuration code, and device drivers that will run on any kind of CPU.
This technique allows diverse computers to run any software written to that specification; only the virtual-machine software itself must be written separately for each type of computer on which it runs.

Self Assessment Questions
1. What are the techniques to realize the virtual-machine concept?
2. What are the advantages of virtual machines?

Operating system-level virtualization
Operating system-level virtualization is a server-virtualization technology which virtualizes servers on the operating system (kernel) layer. It can be thought of as partitioning: a single physical server is sliced into multiple small partitions (otherwise called virtual environments (VE), virtual private servers (VPS), guests, zones, etc.); each such partition looks and feels like a real server from the point of view of its users.
The operating system-level architecture has low overhead, which helps to maximize efficient use of server resources. The virtualization introduces only a negligible overhead and allows running hundreds of virtual private servers on a single physical server. In contrast,
• 32. approaches such as full virtualization (like VMware) and paravirtualization (like Xen or UML) cannot achieve such a level of density, due to the overhead of running multiple kernels. On the other hand, operating system-level virtualization does not allow running different operating systems (i.e. different kernels), although different libraries, distributions, etc. are possible.

Self Assessment Questions
1. Describe operating system-level virtualization.

Summary
The virtual-machine concept has several advantages. In this environment, there is complete protection of the various system resources: each virtual machine is completely isolated from all other virtual machines, so there are no protection problems. At the same time, however, there is no direct sharing of resources; two approaches to provide sharing have been implemented. A virtual machine is a perfect vehicle for operating-systems research and development.

The operating system as an extended machine acts as an interface between the hardware and user application programs. The kernel is the essential center of a computer operating system, i.e. the core that provides basic services for all other parts of the operating system. It includes the interrupt handler, the scheduler, the operating-system address-space manager, etc. In the layered type of operating-system architecture, the components of the kernel are built as layers on one another, and each layer can interact with its neighbors through an interface. In the micro-kernel architecture, most of these components are not part of the kernel but act as another layer on top of it, and the kernel comprises only the essential and basic components.

Terminal Questions
1. Explain the operating system as an extended machine.
2. What is a kernel? What are the main components of a kernel?
3. Explain the monolithic type of kernel architecture in brief.
4. What is a micro-kernel? Describe its architecture.
5. Compare the micro-kernel with the layered architecture of operating systems.
6.
Describe the UNIX kernel components in brief.
7. What are the components of an operating system?
8. Explain the responsibilities of the operating system in process management.
9. Explain the function of the operating system in file management.
10. What are the different services provided by an operating system?
  • 33. Unit 3: Process Management : This unit covers process management and threads. It briefly describes process creation, termination, process states and process control, and discusses processes vs. threads, types of threads, etc. Introduction This unit discusses the definition of a process, process creation, process termination, process states, and process control. It also deals with threads and thread types. A process can be simply defined as a program in execution. A process, along with the program code, comprises the program counter value, processor register contents, values of variables, stack and program data. A process is created and terminated, and it follows some or all of the states of process transition, such as New, Ready, Running, Waiting, and Exit. A thread is a single sequence stream within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. There are two types of threads: user level threads (ULT) and kernel level threads (KLT). User level threads are mostly used on systems where the operating system does not support threads, but they can also be combined with kernel level threads. Threads have properties similar to those of processes, e.g. execution states, context switch etc. Objectives : At the end of this unit, you will be able to understand : What is a Process? Process Creation , Process Termination, Process States, Process Control Threads Types of Threads
  • 34. What is a Process? The notion of a process is central to the understanding of operating systems. The term process is used somewhat interchangeably with ‘task’ or ‘job’. There are quite a few definitions presented in the literature, for instance: A program in execution. An asynchronous activity. The entity to which processors are assigned. The ‘dispatchable’ unit. And many more, but the definition “program in execution” seems to be the most frequently used, and this is the concept we will use in the present study of operating systems. Now that we have agreed upon the definition of a process, the question is: what is its relation to a program? Is it the same thing with a different name, i.e. called a program when it is not executing and a process when it is executing? Well, to be very precise: a process is not the same as a program. A process is more than program code. A process is an ‘active’ entity as opposed to a program, which is considered a ‘passive’ entity. As we all know, a program is an algorithm expressed in some programming language. Being passive, a program is only a part of a process. A process, on the other hand, includes: Current value of the Program Counter (PC) Contents of the processor's registers Values of the variables The process stack, which typically contains temporary data such as subroutine parameters, return addresses, and temporary variables. A data section that contains global variables. A process is the unit of work in a system. In the process model, all software on the computer is organized into a number of sequential processes. A process includes the PC, registers, and variables. Conceptually, each process has its own virtual CPU. In reality, the CPU switches back and forth among processes. Process Creation In general-purpose systems, some way is needed to create processes as needed during operation. There are four principal events that lead to process creation. System initialization. 
Execution of a process creation system call by a running process. A user request to create a new process. Initialization of a batch job.
  • 35. Foreground processes interact with users. Background processes stay in the background sleeping but suddenly spring to life to handle activity such as email, web pages, printing, and so on. Background processes are called daemons. A process may create a new process by executing the system call ‘fork’ in UNIX. This call creates an exact clone of the calling process. The creating process is called the parent process and the created one is called the child process. Only one parent is needed to create a child process. This creation of processes yields a hierarchical structure of processes. Note that each child has only one parent but each parent may have many children. After the fork, the two processes, the parent and the child, initially have the same memory image, the same environment strings and the same open files. After a process is created, both the parent and the child have their own distinct address spaces. Following are some reasons for the creation of a process: 1. A user logs on. 2. A user starts a program. 3. The operating system creates a process to provide a service, e.g., to manage a printer. 4. Some program starts another process. Creation of a process involves the following steps: 1. Assign a unique process identifier to the new process, and make a new entry in the process table for this process. 2. Allocate space for the process: this operation involves finding how much space is needed by the process and allocating space to the parts of the process such as user program, user data, stack and process attributes. The space requirement can be taken by default based on the type of the process, or from the parent process if the process is spawned by another process. 3. Initialize the Process Control Block: the PCB contains various attributes required to execute and control a process, such as process identification, processor status information and control information. 
It can be initialized to standard default values plus any attributes that have been requested for this process. 4. Set the appropriate linkages: the operating system maintains various queues related to processes in the form of linked lists; the newly created process should be attached to one of these queues.
  • 36. 5. Create or expand other data structures: depending on the implementation, an operating system may need to create some data structures for this process, for example to maintain an accounting file for billing or performance assessment. Process Termination A process terminates when it finishes executing its last statement. Its resources are returned to the system, it is purged from any system lists or tables, and its process control block (PCB) is erased, i.e., the PCB’s memory space is returned to a free memory pool. A process usually terminates for one of the following reasons: • Normal Exit Most processes terminate because they have done their job. This call is exit in UNIX. • Error Exit When a process discovers a fatal error. For example, a user tries to compile a program that does not exist. • Fatal Error An error caused by a bug in the program, for example, executing an illegal instruction, referencing non-existent memory or dividing by zero. • Killed by another Process A process executes a system call telling the operating system to terminate some other process. Process States A process goes through a series of discrete process states during its lifetime. Depending on the implementation, operating systems may differ in the number of states a process goes through. Though there are various state models ranging from two states to nine states, we will first see a five state model and then a seven state model, as lower state models are now obsolete. Five State Process Model Following are the states of a five state process model. Figure 3.1 shows these state transitions. • New State The process is being created. • Terminated State The process has finished execution.
  • 37. Blocked (waiting) State When a process blocks, it does so because logically it cannot continue, typically because it is waiting for input that is not yet available. Formally, a process is said to be blocked if it is waiting for some event to happen (such as an I/O completion) before it can proceed. In this state a process is unable to run until some external event happens. • Running State A process is said to be running if it currently has the CPU, that is, it is actually using the CPU at that particular instant. • Ready State A process is said to be ready if it could use a CPU if one were available. It is runnable but temporarily stopped to let another process run. Logically, the ‘Running’ and ‘Ready’ states are similar. In both cases the process is willing to run, only in the case of the ‘Ready’ state there is temporarily no CPU available for it. The ‘Blocked’ state is different from the ‘Running’ and ‘Ready’ states in that the process cannot run, even if the CPU is available. Following are the six possible transitions among the above mentioned five states. Transition 1 occurs when a process discovers that it cannot continue. If a running process initiates an I/O operation before its allotted time expires, the running process voluntarily relinquishes the CPU. This state transition is: Block (process): Running → Blocked. Transition 2 occurs when the scheduler decides that the running process has run long enough and it is time to let another process have CPU time. This state transition is:
  • 38. Time-Run-Out (process): Running → Ready. Transition 3 occurs when all other processes have had their share and it is time for the first process to run again. This state transition is: Dispatch (process): Ready → Running. Transition 4 occurs when the external event for which a process was waiting (such as the arrival of input) happens. This state transition is: Wakeup (process): Blocked → Ready. Transition 5 occurs when the process is created. This state transition is: Admitted (process): New → Ready. Transition 6 occurs when the process has finished execution. This state transition is: Exit (process): Running → Terminated. Swapping Many operating systems follow the above process model. However, in operating systems which do not employ virtual memory, the processor will be idle most of the time, considering the difference between the speed of I/O and that of the processor. There will be many processes waiting for I/O in memory, exhausting the memory. If there is no ready process to run, new processes cannot be created as there is no memory available to accommodate a new process. Thus the processor has to wait until one of the waiting processes becomes ready after completion of an I/O operation. This problem can be solved by adding two more states to the above process model, using the swapping technique. Swapping involves moving part or all of a process from main memory to disk. When none of the processes in main memory is in the ready state, the operating system swaps one of the blocked processes out onto disk into a suspend queue. This is a queue of existing processes that have been temporarily shifted out of main memory, or suspended. The operating system then either creates a new process or brings in a swapped process from the disk which has become ready.
  • 39. Seven State Process Model The following figure 3.2 shows the seven state process model, which uses the swapping technique described above. Apart from the transitions we have seen in the five state model, the following new transitions occur in the seven state model. • Blocked to Blocked / Suspend: If there are no ready processes in main memory, at least one blocked process is swapped out to make room for another process that is not blocked. • Blocked / Suspend to Blocked: If a process terminates, making space in main memory, and there is a high priority process which is blocked but suspended and is expected to become unblocked very soon, that process is brought into main memory. • Blocked / Suspend to Ready / Suspend: A process is moved from Blocked / Suspend to Ready / Suspend if the event on which the process was waiting occurs while there is no space in main memory. • Ready / Suspend to Ready: If there are no ready processes in main memory, the operating system has to bring one into main memory to continue execution. Sometimes this transition takes place even when there are ready processes in main memory, if they have lower priority than one of the processes in the Ready / Suspend state; the high priority process is then brought into main memory. • Ready to Ready / Suspend: Normally the blocked processes are suspended by the operating system, but sometimes, to free a large block of memory, a ready process may be suspended. In this case, normally the low priority processes are suspended.
  • 40. New to Ready / Suspend: When a new process is created, it should be added to the Ready state. But sometimes sufficient memory may not be available to allocate to the newly created process. In this case, the new process is shifted to Ready / Suspend. Process Control In this section we will study the structure of a process, the process control block, modes of process execution, and process switching. Process Structure After studying the process states, we will now see where the process resides, and what the physical manifestation of a process is. The location of the process depends on the memory management scheme being used. In the simplest case, a process is maintained in secondary memory, and to manage it, at least a small part of the process is maintained in main memory. To execute the process, the entire process or part of it is brought into main memory, and for that the operating system needs to know the location of the process. Process identification Processor state information Process control information User Stack Private user address space (program, data) Shared address space Figure 3.3: Process Image The obvious contents of a process are the user program to be executed and the user data associated with that program. Apart from these there are two major parts of a process: the system stack, which is used to store parameters and calling addresses for procedure and system calls, and the process control block, which is nothing but a collection of process attributes needed by the operating system to control a process. The collection of user program, data, system stack, and process control block is called the process image, as shown in figure 3.3 above. Process Control Block
  • 41. A process control block, as shown in figure 3.4 below, contains various attributes required by the operating system to control a process, such as process state, program counter, CPU state, CPU scheduling information, memory management information, I/O state information, etc. These attributes can be grouped into three general categories as follows: Process identification Processor state information Process control information The first category stores information related to process identification, such as the identifier of the current process, the identifier of the process which created this process (to maintain the parent-child process relationship), and the user identifier, i.e. the identifier of the user on whose behalf this process is being run. The processor state information consists of the contents of the processor registers, such as user-visible registers, control and status registers (which include the program counter and program status word), and stack pointers. The third category, process control information, is mainly required for the control of a process. The information includes: scheduling and state information, data structuring, inter-process communication, process privileges, memory management, and resource ownership and utilization. pointer process state process number program counter registers memory limits list of open files . . . Figure 3.4: Process Control Block Modes of Execution
  • 42. In order to ensure the correct execution of each process, an operating system must protect each process’s private information (executable code, data, and stack) from uncontrolled interference from other processes. This is accomplished by suitably restricting the memory address space available to a process for reading/writing, so that the OS can regain CPU control through hardware-generated exceptions whenever a process violates those restrictions. Also, the OS code needs to execute in a privileged condition with respect to “normal” code: to manage processes, it needs to be able to execute operations which are forbidden to “normal” processes. Thus most processors support at least two modes of execution. Certain instructions can only be executed in the more privileged mode. These include reading or altering a control register such as the program status word, primitive I/O instructions, and memory management instructions. The less privileged mode is referred to as user mode, as user programs are typically executed in this mode; the more privileged mode, in which important operating system functions are executed, is called kernel mode, system mode or control mode. The current mode information, i.e. whether the processor is running in user mode or kernel mode, is stored in the PSW. The mode change is normally done by executing a change mode instruction, typically after a user process invokes a system call, or whenever an interrupt occurs, as these are operating system functions and need to be executed in privileged mode. After the completion of the system call or interrupt routine, the mode is changed back to user mode to continue the user process execution. Context Switching To give each process on a multiprogrammed machine a fair share of the CPU, a hardware clock generates interrupts periodically. This allows the operating system to schedule all processes in main memory (using a scheduling algorithm) to run on the CPU at equal intervals. 
Each time a clock interrupt occurs, the interrupt handler checks how much time the currently running process has used. If it has used up its entire time slice, then the CPU scheduling algorithm (in the kernel) picks a different process to run. Each switch of the CPU from one process to another is called a context switch. A context is the contents of a CPU’s registers and program counter at any point in time. Context switching can be described as the kernel (i.e., the core of the operating system) performing the following activities with regard to processes on the CPU: (1) suspending the progression of one process and storing the CPU’s state (i.e., the context) for that process somewhere in memory, (2) retrieving the context of the next process from memory and restoring it in the CPU’s registers, and (3) returning to the location indicated by the program counter (i.e., returning to the line of code at which the process was interrupted) in order to resume the process. Figure 3.5 below depicts the process of a context switch from process P0 to process P1.
  • 43. Figure 3.5: Process switching Self Assessment Questions: 1. Discuss the process state with its five state process model. 2. Explain the seven state process model. 3. What is Process Control? Discuss the process control block. 4. Write a note on Context Switching. A context switch is sometimes described as the kernel suspending execution of one process on the CPU and resuming execution of some other process that had previously been suspended. A context switch occurs due to an interrupt, a trap (an error due to the current instruction) or a system call, as described below: • Clock interrupt: when a process has executed the time quantum which was allocated to it, the process must be switched from the running state to the ready state, and another process must be dispatched for execution. • I/O interrupt: whenever any I/O related event occurs, the OS is interrupted; the OS has to determine the reason for it and take the necessary action for that event. Thus the current process is switched to the ready state and the interrupt routine is loaded to handle the interrupt event (e.g. after an I/O interrupt the OS moves all the processes which were blocked on the event from the blocked state to the ready state, and from blocked/suspended to ready/suspended). After completion of the interrupt related actions, it would be expected that the process which was switched out should be brought back for execution, but that does not happen. At this point the
  • 44. scheduler decides afresh which process is to be scheduled for execution from among all the ready processes. This is important, as it will schedule any high priority process present in the ready queue that was added during the interrupt handling period. • Memory fault: when the virtual memory technique is used for memory management, it often happens that a process refers to a memory address which is not present in main memory and needs to be brought in. As the memory block transfer takes time, another process should be given a chance to execute and the current process should be blocked. Thus the OS blocks the current process, issues an I/O request to get the memory block into memory, switches the current process to the blocked state, and loads another process for execution. • Trap: if the instruction being executed causes an error or exception, then depending on the criticality of the error / exception and the design of the operating system, the OS may either move the process to the exit state, or continue executing the current process after a possible recovery. System call: often a process has to invoke a system call for a privileged job; for this, the current process is blocked and the respective operating system's system call code is executed. Thus the context of the current process is switched to the system call code. Example: UNIX Process Let us see the example of UNIX System V, which makes use of a simple but powerful process facility that is highly visible to the user. The following figure shows the model followed by UNIX, in which most of the operating system executes within the environment of a user process. Thus, two modes, user and kernel, are required. UNIX uses two categories of processes: system processes and user processes. System processes run in kernel mode and execute operating system code to perform administrative and housekeeping functions, such as allocation of memory and process swapping. 
User processes operate in user mode to execute user programs and utilities, and in kernel mode to execute instructions belonging to the kernel. A user process enters kernel mode by issuing a system call, when an exception (fault) is generated, or when an interrupt occurs.
  • 45. A total of nine process states are recognized by the UNIX operating system, as explained below: • User Running: Executing in user mode. • Kernel Running: Executing in kernel mode. • Ready to Run, in Memory: Ready to run as soon as the kernel schedules it. • Asleep in Memory: Unable to execute until an event occurs; the process is in main memory (a blocked state). • Ready to Run, Swapped: The process is ready to run, but the swapper must swap the process into main memory before the kernel can schedule it to execute. • Sleeping, Swapped: The process is awaiting an event and has been swapped to secondary storage (a blocked state). • Preempted: The process is returning from kernel to user mode, but the kernel preempts it and does a process switch to schedule another process. • Created: The process is newly created and not yet ready to run.
  • 46. Zombie: The process no longer exists, but it leaves a record for its parent process to collect. UNIX employs two Running states to indicate whether the process is executing in user mode or kernel mode. A distinction is made between the two states (Ready to Run, in Memory) and (Preempted). These are essentially the same state, as indicated by the dotted line joining them. The distinction is made to emphasize the way in which the preempted state is entered. When a process is running in kernel mode (as a result of a supervisor call, clock interrupt, or I/O interrupt), there will come a time when the kernel has completed its work and is ready to return control to the user program. At this point, the kernel may decide to preempt the current process in favor of one that is ready and of higher priority. In that case, the current process moves to the preempted state. However, for purposes of dispatching, the processes in the preempted state and those in the Ready to Run, in Memory state form one queue. Preemption can only occur when a process is about to move from kernel mode to user mode. While a process is running in kernel mode, it may not be preempted. This makes UNIX unsuitable for real-time processing. Two processes are unique in UNIX. Process 0 is a special process that is created when the system boots; in effect, it is predefined as a data structure loaded at boot time. It is the swapper process. In addition, process 0 spawns process 1, referred to as the init process; all other processes in the system have process 1 as an ancestor. When a new interactive user logs onto the system, it is process 1 that creates a user process for that user. Subsequently, the user process can create child processes in a branching tree, so that any particular application can consist of a number of related processes. Threads A thread is a single sequence stream within a process. 
Because threads have some of the properties of processes, they are sometimes called lightweight processes. Within a process, threads allow multiple streams of execution. In many respects, threads are a popular way to improve applications through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. Like a traditional process, i.e., a process with one thread, a thread can be in any of several states (Running, Blocked, Ready or Terminated). Each thread has its own stack, since a thread will generally call different procedures and thus have a different execution history. In an operating system that has a thread facility, the basic unit of CPU utilization is a thread. A thread has, or consists of, a program counter (PC), a register set, and a stack space. Threads are not independent of one another the way processes are; as a result, threads share with the other threads of their process (also known as a task) their code section, data section, and OS resources such as open files and signals. Processes Vs Threads