Distributed Objects and Remote Invocation: Communication between distributed objects, Remote procedure call, Events and notifications; Operating system layer: Protection, Processes and threads, Operating system architecture. Introduction to Distributed shared memory, Design and implementation issues of DSM. Case Study: CORBA and Java RMI.
2. Distributed System
Mr. Sagar Pandya
Information Technology Department
sagar.pandya@medicaps.ac.in
Course Code | Course Name        | Hours Per Week (L T P) | Total Hrs. | Total Credits
IT3EL04     | Distributed System | 3 0 0                  | 3          | 3
3. Reference Books
Text Book:
1. G. Coulouris, J. Dollimore and T. Kindberg, Distributed Systems: Concepts
and design, Pearson.
2. P K Sinha, Distributed Operating Systems: Concepts and design, PHI
Learning.
3. Sukumar Ghosh, Distributed Systems: An Algorithmic Approach, Chapman
and Hall/CRC.
Reference Books:
1. Tanenbaum and Steen, Distributed systems: Principles and Paradigms,
Pearson.
2. Sunita Mahajan & Shah, Distributed Computing, Oxford Press.
3. Nancy Lynch, Distributed Algorithms, Morgan Kaufmann.
4. Unit-2
Distributed Objects and Remote Invocation:
Communication between distributed objects, Remote procedure call,
Events and notifications,
Operating system layer: Protection,
Processes and threads,
Operating system architecture.
Introduction to Distributed shared memory,
Design and implementation issues of DSM.
Case Study: CORBA and JAVA RMI.
5. INTRODUCTION
WHAT IS A DISTRIBUTED OBJECT?
A distributed object is an object that can be accessed remotely.
This means that a distributed object can be used like a regular object,
but from anywhere on the network.
An object is typically considered to encapsulate data and behavior.
The location of the distributed object is not critical to the user of the
object.
A distributed object might provide its user with a set of related
capabilities. The application that provides a set of capabilities is
often referred to as a service.
A Business Object might be a local object or a distributed object. The
term business object refers to an object that performs a set of tasks
associated with a particular business process.
6. INTRODUCTION
Distributed object model:
The term distributed objects usually refers to software modules that
are designed to work together, but reside either in multiple computers
connected via a network or in different processes inside the same
computer.
Distributed objects
The state of an object consists of the values of its instance variables.
Since object-based programs are logically partitioned, the physical
distribution of objects into different processes or computers in a
distributed system is a natural extension.
Distributed object systems may adopt the client-server architecture:
objects are managed by servers, and their clients invoke the objects'
methods using remote method invocation.
7. INTRODUCTION
COMMUNICATION BETWEEN DISTRIBUTED OBJECTS:
Middleware mechanisms such as RMI are required for successful
communication between distributed objects.
Stub and skeleton objects act as the communication objects in a
distributed system.
RMI means Remote Method Invocation. Whenever needed, RMI
invokes methods on the client- and server-side objects.
8. INTRODUCTION
As shown in the diagram above, RMI communication follows these
steps:
A stub is defined on client side (machine A).
Then the stub passes caller data over the network to the server
skeleton (machine B).
The skeleton then passes received data to the called object.
Skeleton waits for a response and returns the result to the client stub
(machine A).
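The stub/skeleton round trip above can be simulated in ordinary code. In real Java RMI the stub and skeleton classes are generated by the RMI toolchain (rmic); the Python classes below are only an illustrative in-process sketch of the same division of labour, with pickle standing in for the wire format:

```python
import pickle

class Skeleton:
    """Server side (machine B): unmarshals the request, calls the real object."""
    def __init__(self, remote_object):
        self.remote_object = remote_object

    def handle(self, request_bytes):
        method, args = pickle.loads(request_bytes)           # unmarshal request
        result = getattr(self.remote_object, method)(*args)  # call the real object
        return pickle.dumps(result)                          # marshal the reply

class Stub:
    """Client side (machine A): marshals the call and forwards it."""
    def __init__(self, skeleton):
        self.skeleton = skeleton  # stands in for the network link

    def invoke(self, method, *args):
        request = pickle.dumps((method, args))  # marshal caller data
        reply = self.skeleton.handle(request)   # 'send' over the network
        return pickle.loads(reply)              # unmarshal the result

class Calculator:
    """The remote object living on machine B."""
    def add(self, a, b):
        return a + b

stub = Stub(Skeleton(Calculator()))
print(stub.invoke("add", 2, 3))  # the caller sees an ordinary call -> 5
```

The caller never touches the network explicitly; the stub and skeleton hide the marshalling on both sides.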
9. Remote Procedure Call
Remote Procedure Call (RPC) is an interprocess communication
technique.
It is used for client-server applications.
RPC mechanisms are used when a computer program causes a
procedure or subroutine to execute in a different address space,
coded as a normal procedure call without the programmer
explicitly coding the details of the remote interaction.
The RPC mechanism also manages the low-level transport protocol,
such as User Datagram Protocol (UDP) or Transmission Control
Protocol/Internet Protocol (TCP/IP), used to carry the message data
between programs.
11. Remote Procedure Call
The Remote Procedure Call (RPC) facility emerged out of the need
for a simpler way to build distributed programs.
It is a special case of the general message-passing model of IPC.
Providing the programmers with a familiar mechanism for building
distributed systems is one of the primary motivations for developing
the RPC facility.
While the RPC facility is not a panacea for all types of
distributed applications, it does provide a valuable communication
mechanism that is suitable for building a fairly large number of
them.
The RPC has become a widely accepted IPC mechanism in
distributed systems.
The popularity of RPC as the primary communication mechanism for
distributed applications is due to the following features:
12. Remote Procedure Call
1. Simple call syntax.
2. Familiar semantics (because of its similarity to local procedure
calls).
3. Its specification of a well-defined interface. This property is used to
support compile-time type checking and automated interface
generation.
4. Its ease of use. The clean and simple semantics of a procedure call
makes it easier to build distributed computations and to get them right.
5. Its efficiency. Procedure calls are simple enough for communication
to be quite rapid.
6. It can be used as an IPC mechanism to communicate between
processes on different machines as well as between different processes
on the same machine.
13. Remote Procedure Call
The RMI (Remote Method Invocation) is an API that provides a
mechanism to create distributed applications in Java.
The RMI allows an object to invoke methods on an object running in
another JVM.
The RMI provides remote communication between the applications
using two objects stub and skeleton.
Understanding stub and skeleton
RMI uses stub and skeleton object for communication with the
remote object.
A remote object is an object whose methods can be invoked from
another JVM.
Let's understand the stub and skeleton objects:
14. Remote Procedure Call
Stub
The stub is an object that acts as a gateway for the client side. All
outgoing requests are routed through it.
It resides at the client side and represents the remote object.
When the caller invokes a method on the stub object, the stub does the
following tasks:
It initiates a connection with the remote Java Virtual Machine (JVM),
it writes and transmits (marshals) the parameters to the remote JVM,
it waits for the result,
it reads (unmarshals) the return value or exception, and
finally, it returns the value to the caller.
15. Remote Procedure Call
Skeleton
The skeleton is an object that acts as a gateway for the server-side
object. All incoming requests are routed through it.
When the skeleton receives the incoming request, it does the
following tasks:
It reads (unmarshals) the parameters for the remote method,
it invokes the method on the actual remote object, and
it writes and transmits (marshals) the result to the caller.
In the Java 2 SDK, a stub protocol was introduced that eliminates
the need for skeletons.
17. Remote Procedure Call
What is marshalling in RPC?
Remote Procedure Call (RPC) is a client-server mechanism that
enables an application on one machine to make a procedure call to
code on another machine.
The client calls a local procedure—a stub routine—that packs its
arguments into a message and sends them across the network to a
particular server process.
The client-side stub routine then blocks. Meanwhile, the server
unpacks the message, calls the procedure, packs the return results
into a message, and sends them back to the client stub.
The client stub unblocks, receives the message, unpacks the results
of the RPC, and returns them to the caller. This packing of arguments
is sometimes called marshaling.
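Marshalling can be made concrete with a tiny sketch: pack the arguments of a hypothetical remote call into a flat message, then unpack them on the other side. The procedure number and the `!Iii` wire format here are illustrative assumptions, not part of any real protocol:

```python
import struct

# Marshal the arguments of a hypothetical remote call add(7, 35):
# procedure id + two signed ints, in network byte order.
PROC_ADD = 1
message = struct.pack("!Iii", PROC_ADD, 7, 35)
print(len(message))  # -> 12 bytes on the wire

# On the server, the same agreed-upon format recovers the program objects.
proc_id, a, b = struct.unpack("!Iii", message)
print(proc_id, a, b)  # -> 1 7 35
```

This is exactly what the client stub does before sending, and what the server does on receipt, for every call.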
18. Remote Procedure Call
The sequence of events in a remote procedure call is as follows:
The client stub is called by the client.
The client stub makes a system call to send the message to the server
and puts the parameters in the message.
The message is sent from the client to the server by the client’s
operating system.
The message is passed to the server stub by the server operating
system.
The parameters are removed from the message by the server stub.
Then, the server procedure is called by the server stub.
20. Remote Procedure Call
THE RPC MODEL
The RPC model is similar to the well-known and well-understood
procedure call model used for the transfer of control and data within
a program in the following manner:
1. For making a procedure call, the caller places arguments to the
procedure in some well-specified location.
2. Control is then transferred to the sequence of instructions that
constitutes the body of the procedure.
3. The procedure body is executed in a newly created execution
environment that includes copies of the arguments given in the
calling instruction.
4. After the procedure's execution is over, control returns to the
calling point, possibly returning a result.
22. Remote Procedure Call
The RPC mechanism is an extension of the procedure call
mechanism in the sense that it enables a call to be made to a
procedure that does not reside in the address space of the calling
process.
The called procedure (commonly called remote procedure) may be
on the same computer as the calling process or on a different
computer.
In case of RPC, since the caller and the callee processes have disjoint
address spaces (possibly on different computers), the remote
procedure has no access to data and variables of the caller's
environment.
Therefore the RPC facility uses a message-passing scheme for
information exchange between the caller and the callee processes.
23. Remote Procedure Call
1. The caller (commonly known as client process) sends a call
(request) message to the callee (commonly known as server process)
and waits (blocks) for a reply message. The request message contains
the remote procedure's parameters, among other things.
2. The server process executes the procedure and then returns the
result of procedure execution in a reply message to the client process.
3. Once the reply message is received, the result of procedure
execution is extracted, and the caller's execution is resumed.
The server process is normally dormant, awaiting the arrival of a
request message.
When one arrives, the server process extracts the procedure's
parameters, computes the result, sends a reply message, and then
awaits the next call message.
25. Remote Procedure Call
Note that in this model of RPC, only one of the two processes is
active at any given time.
However, in general, the RPC protocol makes no restrictions on the
concurrency model implemented, and other models of RPC are
possible depending on the details of the parallelism of the caller's and
callee's environments and the RPC implementation.
For example, an implementation may choose to have RPC calls to be
asynchronous, so that the client may do useful work while waiting
for the reply from the server.
Another possibility is to have the server create a thread to process an
incoming request, so that the server can be free to receive other
requests.
26. Remote Procedure Call
Example: RPC
A well-known example is the Distributed Computing Environment
(DCE) from the Open Software Foundation (OSF): middleware
between an existing network operating system and distributed
applications, initially designed for UNIX and Windows NT.
27. Remote Procedure Call
How RPC Works?
The RPC architecture has mainly five components:
Client
Client Stub
RPC Runtime
Server Stub
Server
The following steps take place during the RPC process:
Step 1) The client, the client stub, and one instance of the RPC
Runtime execute on the client machine.
Step 2) A client starts the client stub by passing parameters in
the usual way.
28. Remote Procedure Call
The client stub packs (marshals) the parameters into a message within the
client's own address space. It then asks the local RPC Runtime to send the
message to the server stub.
Step 3) In this stage, the user accesses RPC by making a regular local
procedure call. The RPC Runtime manages the transmission of messages across
the network between client and server. It also performs retransmission,
acknowledgment, routing, and encryption.
Step 4) After the server procedure completes, control returns to the server stub,
which packs (marshals) the return values into a message. The server stub then
hands the message to the transport layer.
Step 5) In this step, the transport layer sends the result message back to the
client transport layer, which hands the message to the client stub.
Step 6) In this stage, the client stub unpacks (demarshals) the return parameters
from the resulting packet, and execution returns to the caller.
30. Remote Procedure Call
The steps in making an RPC:
The client procedure calls the client stub in the normal way.
The client stub builds a message and traps to the kernel.
The kernel sends the message to the remote kernel.
The remote kernel gives the message to the server stub.
The server stub unpacks the parameters and calls the server.
The server computes the result and returns it to the server stub.
The server stub packs the result in a message and traps to the kernel.
The remote kernel sends the message to the client's kernel.
The client's kernel gives the message to the client stub.
The client stub unpacks the result and returns it to the client.
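The steps above can be exercised end to end with Python's standard-library XML-RPC modules, where `ServerProxy` plays the role of the client stub and `SimpleXMLRPCServer` that of the server stub plus dispatcher. The loopback address, port choice, and the `compute` procedure are all illustrative:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Server machine: the server procedure that the server stub will call.
def compute(x, y):
    return x * y

# Port 0 asks the OS for any free port; logRequests=False keeps output quiet.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(compute)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client machine: ServerProxy acts as the client stub. Each call builds
# a message, traps into the OS to send it, blocks, and unpacks the reply.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.compute(6, 7)
print(result)  # -> 42
server.shutdown()
```

From the caller's point of view, `client.compute(6, 7)` has exactly the syntax of a local call; every kernel-level step in the list above happens underneath it.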
31. Remote Procedure Call
Advantages of RPC
RPC method helps clients to communicate with servers by the
conventional use of procedure calls in high-level languages.
RPC method is modeled on the local procedure call, but the called
procedure is most likely to be executed in a different process and
usually a different computer.
RPC supports process and thread-oriented models.
RPC makes the internal message passing mechanism hidden from the
user.
The effort needed to rewrite and redevelop code is minimal.
Remote procedure calls can be used in both distributed and local
environments.
32. Remote Procedure Call
Advantages of RPC
It omits many of the protocol layers to improve performance.
RPC provides abstraction. For example, the message-passing nature
of network communication remains hidden from the user.
RPC allows applications to be used in a distributed environment,
not only in the local environment.
With RPC, code rewriting and redevelopment effort is minimized.
Process-oriented and thread-oriented models are supported by RPC.
33. Remote Procedure Call
Disadvantages of RPC
RPC passes parameters by value only; passing pointer values is
not allowed.
Remote procedure call (and return) time, i.e. the overhead, can be
significantly higher than that of a local procedure call.
The mechanism is highly vulnerable to failure, as it involves a
communication system, another machine, and another process.
The RPC concept can be implemented in different ways, so there is
no single standard.
RPC offers no flexibility for hardware architecture, as it is
mostly interaction-based.
The cost of the process is increased because of a remote procedure
call.
34. TRANSPARENCY OF RPC
A major issue in the design of an RPC facility is its transparency
property.
A transparent RPC mechanism is one in which local procedures and
remote procedures are (effectively) indistinguishable to programmers.
This requires the following two types of transparencies.
1. Syntactic transparency means that a remote procedure call should
have exactly the same syntax as a local procedure call.
2. Semantic transparency means that the semantics of a remote
procedure call are identical to those of a local procedure call.
It is not very difficult to achieve syntactic transparency of an RPC
mechanism, and the semantics of remote procedure calls are analogous
to those of local procedure calls for the most part:
35. TRANSPARENCY OF RPC
The calling process is suspended until the called procedure returns.
The caller can pass arguments to the called procedure (remote
procedure).
The called procedure (remote procedure) can return results to the
caller.
Unfortunately, achieving exactly the same semantics for remote
procedure calls as for local procedure calls is close to impossible.
Remote procedure calls consume much more time (100-1000 times
more) than local procedure calls.
This is mainly due to the involvement of a communication network in
RPCs.
36. STUB GENERATION
1. Manually: In this method, the RPC implementor provides a set of
translation functions from which a user can construct his or her own
stubs.
This method is simple to implement and can handle very complex
parameter types.
2. Automatically: This is the more commonly used method for stub
generation.
It uses an Interface Definition Language (IDL) to define the
interface between a client and a server.
An interface definition is mainly a list of procedure names supported
by the interface, together with the types of their arguments and
results.
37. STUB GENERATION
This is sufficient information for the client and server to independently
perform compile-time type checking and to generate appropriate calling
sequences.
However, an interface definition also contains other information that helps
RPC reduce data storage and the amount of data transferred over the network.
For example, an interface definition has information to indicate whether each
argument is input, output, or both: only input arguments need be copied from
client to server, and only output arguments need be copied from server to
client.
Similarly, an interface definition also has information about type definitions,
enumerated types, and defined constants that each side uses to manipulate
data from RPC calls, making it unnecessary for both the client and the server
to store this information separately.
38. STUB GENERATION
A server program that implements procedures in an interface is said to
export the interface, and a client program that calls procedures from
an interface is said to import the interface.
When writing a distributed application, a programmer first writes an
interface definition using the IDL.
He or she can then write the client program that imports the interface
and the server program that exports the interface.
The interface definition is processed using an IDL compiler to
generate components that can be combined with client and server
programs, without making any changes to the existing compilers.
An IDL compiler can be designed to process interface definitions for
use with different languages, enabling clients and servers written in
different languages to communicate by using remote procedure calls.
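Automatic stub generation can be sketched in miniature: a real IDL compiler emits source code, but the idea of deriving a type-checking client stub from an interface description can be shown with a dict standing in for the IDL and a loopback function standing in for the RPC runtime (all names here are illustrative assumptions):

```python
# A toy 'interface definition': procedure name -> expected argument types.
INTERFACE = {"add": (int, int), "upper": (str,)}

def make_stub(interface, transport):
    """Generate a client stub whose methods type-check their arguments
    (the compile-time checking a real IDL compiler provides) and then
    forward the call to `transport`, which stands in for the RPC runtime."""
    class Stub:
        pass
    def make_method(name, arg_types):
        def method(self, *args):
            if len(args) != len(arg_types) or \
               any(not isinstance(a, t) for a, t in zip(args, arg_types)):
                raise TypeError(f"{name} expects arguments of types {arg_types}")
            return transport(name, args)  # hand the call to the runtime
        return method
    for name, arg_types in interface.items():
        setattr(Stub, name, make_method(name, arg_types))
    return Stub()

# A loopback 'server' exporting the interface, for demonstration only.
SERVER = {"add": lambda a, b: a + b, "upper": lambda s: s.upper()}
stub = make_stub(INTERFACE, lambda name, args: SERVER[name](*args))
print(stub.add(2, 3))     # -> 5
print(stub.upper("rpc"))  # -> RPC
```

Calling `stub.add("x", 3)` raises a `TypeError` before anything is sent, which is the point of generating stubs from a typed interface definition.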
39. RPC MESSAGES
Any remote procedure call involves a client process and a server
process that are possibly located on different computers.
The mode of interaction between the client and server is that the client
asks the server to execute a remote procedure and the server returns
the result of execution of the concerned procedure to the client.
Based on this mode of interaction, the two types of messages involved
in the implementation of an RPC system are as follows:
1. Call messages that are sent by the client to the server for requesting
execution of a particular remote procedure.
2. Reply messages that are sent by the server to the client for
returning the result of remote procedure execution.
The protocol of the concerned RPC system defines the format of these
two types of messages.
40. RPC MESSAGES
1. Call Messages: Since a call message is used to request execution of a
particular remote procedure, the two basic components necessary in a
call message are as follows:
1. The identification information of the remote procedure to be executed.
2. The arguments necessary for the execution of the procedure
In addition to these two fields, a call message normally has the following
fields:
3. A message identification field that consists of a sequence number.
This field is useful in two ways: for identifying lost and duplicate
messages in case of system failures, and for properly matching reply
messages to outstanding call messages, especially in those cases where
the replies of several outstanding call messages arrive out of order.
41. RPC MESSAGES
4. A message type field that is used to distinguish call messages from
reply messages. For example, in an RPC system, this field may be set to
0 for all call messages and set to 1 for all reply messages.
5. A client identification field that may be used for two purposes: to
allow the server of the RPC to identify the client to whom the reply
message has to be returned, and to allow the server to check the
authentication of the client process for executing the concerned procedure.
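The fields listed above can be written down as a concrete message layout. The field names below are illustrative assumptions; only the 0-for-call, 1-for-reply convention is taken from the text:

```python
from dataclasses import dataclass
from typing import Any

CALL, REPLY = 0, 1  # message type field: 0 = call message, 1 = reply message

@dataclass
class CallMessage:
    message_id: int        # sequence number: detects lost/duplicate messages
    client_id: str         # identifies (and can authenticate) the caller
    remote_procedure: str  # identification of the procedure to execute
    arguments: tuple       # arguments necessary for the execution
    message_type: int = CALL

@dataclass
class ReplyMessage:
    message_id: int        # copied from the call, to match reply with call
    reply_status: int      # 0 = success, nonzero = type of error
    result: Any = None
    message_type: int = REPLY

call = CallMessage(message_id=7, client_id="clientA",
                   remote_procedure="add", arguments=(2, 3))
reply = ReplyMessage(message_id=call.message_id, reply_status=0, result=5)
print(reply.message_id == call.message_id)  # -> True: properly matched
```

Carrying the call's `message_id` back in the reply is what lets a client with several outstanding calls pair each reply with the right request.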
42. RPC MESSAGES
2. Reply Messages:
When the server of an RPC receives a call message from a client, it
could be faced with one of the following conditions.
In the list below, it is assumed for a particular condition that no problem
was detected by the server for any of the previously listed conditions:
1. The server finds that the call message is not intelligible to it. This may
happen when a call message violates the RPC protocol. Obviously the
server will reject such calls.
2. The server detects by scanning the client's identifier field that the
client is not authorized to use the service. The server will return an
unsuccessful reply without bothering to make an attempt to execute the
procedure.
43. RPC MESSAGES
3. The server finds that the remote program, version, or procedure
number specified in the remote procedure identifier field of the call
message is not available with it. Again the server will return an
unsuccessful reply without bothering to make an attempt to execute the
procedure.
4. If this stage is reached, an attempt will be made to execute the remote
procedure specified in the call message. However, it may happen that the
remote procedure is not able to decode the supplied arguments. This may
happen due to an incompatible RPC interface being used by the client
and server.
5. An exception condition (such as division by zero) occurs while
executing the specified remote procedure.
6. The specified remote procedure is executed successfully.
44. RPC MESSAGES
Obviously, in the first five cases, an unsuccessful reply has to be sent to
the client with the reason for failure in processing the request and a
successful reply has to be sent in the sixth case with the result of
procedure execution.
Therefore the format of a successful reply message and an unsuccessful
reply message is normally slightly different.
The message identifier field of a reply message is the same as that of its
corresponding call message so that a reply message can be properly
matched with its call message.
A typical RPC reply message format for successful and unsuccessful
replies may be of the form shown in Figure 4.4.
46. RPC MESSAGES
The message type field is properly set to indicate that it is a reply
message.
For a successful reply, the reply status field is normally set to zero and is
followed by the field containing the result of procedure execution.
For an unsuccessful reply, the reply status field is set to a nonzero
value to indicate failure; the particular value may further indicate the
type of error.
However, in either case, normally a short statement describing the reason
for failure is placed in a separate field following the reply status field.
47. MARSHALING ARGUMENTS AND RESULTS
Implementation of remote procedure calls involves the transfer of
arguments from the client process to the server process and the
transfer of results from the server process to the client process.
These arguments and results are basically language-level data
structures (program objects), which are transferred in the form of
message data between the two computers involved in the call.
For RPCs this operation is known as marshaling and basically
involves the following actions:
1. Taking the arguments (of a client process) or the result (of a server
process) that will form the message data to be sent to the remote
process.
2. Encoding the message data of step 1 above on the sender's
computer.
48. MARSHALING ARGUMENTS AND RESULTS
This encoding process involves the conversion of program objects
into a stream form that is suitable for transmission and placing them
into a message buffer.
3. Decoding of the message data on the receiver's computer. This
decoding process involves the reconstruction of program objects
from the message data that was received in stream form.
In order that encoding and decoding of an RPC message can be
performed successfully, the order and the representation method
(tagged or untagged) used to marshal arguments and results must be
known to both the client and the server of the RPC.
This provides a degree of type safety between a client and a server
because the server will not accept a call from a client until the client
uses the same interface definition as the server.
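The difference between an untagged representation (the receiver must already know the layout) and a tagged, self-describing one can be seen with two stdlib encodings; the JSON envelope shown is an illustrative assumption, not a real RPC format:

```python
import json
import struct

args = (42, 3)  # program objects to marshal

# Untagged: a bare byte layout; both sides must agree on '!ii' in advance.
untagged = struct.pack("!ii", *args)
print(len(untagged))  # -> 8 bytes
assert struct.unpack("!ii", untagged) == args

# Tagged: the encoding carries type/structure information with the data,
# so the receiver can reconstruct the objects without prior agreement.
tagged = json.dumps({"types": ["int", "int"], "values": list(args)}).encode()
decoded = json.loads(tagged)
assert tuple(decoded["values"]) == args
print(len(tagged) > len(untagged))  # -> True: the tags cost extra bytes
```

This is the trade-off the text alludes to: tagged representations are more flexible, untagged ones are more compact but demand that client and server share the same interface definition.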
49. SERVER MANAGEMENT
In RPC-based applications, two important issues that need to be
considered for server management are
1. Server implementation and
2. Server creation.
Server Implementation
Based on the style of implementation used, servers may be of two
types:
1. Stateful and
2. Stateless.
Stateful Servers
A stateful server maintains clients' state information from one remote
procedure call to the next.
50. SERVER MANAGEMENT
That is, in case of two subsequent calls by a client to a stateful server,
some state information pertaining to the service performed for the
client as a result of the first call execution is stored by the server
process.
This state information is subsequently used at the time of
executing the second call.
Stateless Servers
A stateless server does not maintain any client state information.
Therefore every request from a client must be accompanied with all
the necessary parameters to successfully carry out the desired
operation.
For example, a server for byte-stream files in which every read or
write request carries all the needed parameters (such as the filename,
the position in the file, and the number of bytes) is stateless.
51. SERVER MANAGEMENT
A stateful server can remember client data from one request to the
next, whereas a stateless server keeps no such information.
A stateful server maintains state about its clients between requests.
FTP, SMTP and Telnet servers are stateful servers, because the server
knows who you are once you have logged in and can track you.
An HTTP server, on the other hand, is stateless unless used with an
application layer that can utilize sessions to maintain a state for the
user.
For example, a PHP backend can use sessions to give the user a state
or provide login facilities over an HTTP server.
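The contrast can be sketched as two toy file servers; in the stateless one every request is self-contained, while the stateful one keeps a per-client cursor between calls (the file contents and method names are illustrative):

```python
FILES = {"notes.txt": b"hello distributed world"}

class StatefulServer:
    def __init__(self):
        self.position = {}  # per-client cursor: state kept between calls

    def open(self, client, name):
        self.position[(client, name)] = 0

    def read(self, client, name, nbytes):  # client sends no position
        pos = self.position[(client, name)]
        data = FILES[name][pos:pos + nbytes]
        self.position[(client, name)] = pos + nbytes  # remember for next call
        return data

class StatelessServer:
    def read(self, name, position, nbytes):  # every request is self-contained
        return FILES[name][position:position + nbytes]

sf = StatefulServer()
sf.open("c1", "notes.txt")
print(sf.read("c1", "notes.txt", 5))    # -> b'hello'
print(sf.read("c1", "notes.txt", 12))   # -> b' distributed' (server remembered pos)

sl = StatelessServer()
print(sl.read("notes.txt", 6, 11))      # -> b'distributed' (client supplies pos)
```

If the stateful server crashes, the cursors are lost and clients must reopen; the stateless server can be restarted transparently, which is why statelessness simplifies failure handling.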
52. COMMUNICATION PROTOCOLS FOR RPCs
Different systems, developed on the basis of remote procedure calls,
have different IPC requirements.
Based on the needs of different systems, several communication
protocols have been proposed for use in RPCs.
1. The Request Protocol
2. The Request/Reply Protocol
3. The Request/Reply/Acknowledge Reply Protocol
53. COMMUNICATION PROTOCOLS FOR RPCs
1. The Request Protocol:
This protocol is also known as the R (request) protocol.
It is used in RPCs in which the called procedure has nothing to return
as the result of procedure execution and the client requires no
confirmation that the procedure has been executed.
Since no acknowledgment or reply message is involved in this
protocol, only one message per call is transmitted.
The client normally proceeds immediately after sending the request
message as there is no need to wait for a reply message.
The protocol provides may-be call semantics and requires no
retransmission of request messages.
55. COMMUNICATION PROTOCOLS FOR RPCs
An RPC that uses the R protocol is called asynchronous RPC.
An asynchronous RPC helps in improving the combined
performance of both the client and the server in those distributed
applications in which the client does not need a reply to each request.
Client performance is improved because the client is not blocked and
can immediately continue to do other work after making the call.
On the other hand, server performance is improved because the
server need not generate and send any reply for the request.
One such application is a distributed window system.
A distributed window system, such as X-11 [Davison et al. 1992], is
programmed as a server, and application programs wishing to display
items in windows on a display screen are its clients.
56. COMMUNICATION PROTOCOLS FOR RPCs
To display items in a window, a client normally sends many requests
(each request containing a relatively small amount of information for
a small change in the displayed information) to the server one after
another without waiting for a reply for each of these requests because
it does not need replies for the requests.
Notice that for an asynchronous RPC, the RPCRuntime does not take
responsibility for retrying a request in case of communication failure.
This means that if an unreliable datagram transport protocol such as
UDP is used for the RPC, the request message could be lost without
the client's knowledge.
Applications using asynchronous RPC with unreliable transport
protocol must be prepared to handle this situation.
57. COMMUNICATION PROTOCOLS FOR RPCs
2. The Request/Reply Protocol:
This protocol is also known as the RR (request/reply) protocol.
It is useful for the design of systems involving simple RPCs.
A simple RPC is one in which all the arguments as well as all the
results fit in a single packet buffer, and the duration of a call and the
interval between calls are both short (less than the transmission time
for a packet between the client and server).
The protocol is based on the idea of using implicit acknowledgment
to eliminate explicit acknowledgment messages.
Therefore in this protocol:
A server's reply message is regarded as an acknowledgment of the
client's request message.
58. COMMUNICATION PROTOCOLS FOR RPCs
A subsequent call packet from a client is regarded as an
acknowledgment of the server's reply message of the previous call
made by that client.
The RR protocol in its basic form does not possess failure-handling
capabilities.
Therefore to take care of lost messages, the timeouts-and-retries
technique is normally used along with the RR protocol.
In this technique, a client retransmits its request message if it does
not receive the response message before a predetermined timeout
period elapses.
Obviously, if duplicate request messages are not filtered out, the RR
protocol combined with this technique provides at-least-once call
semantics.
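The timeout-and-retries technique can be sketched as follows; the lossy server is simulated in-process, and all names are illustrative. Without duplicate filtering, each retransmission that reaches the server executes the procedure again, which is why only at-least-once semantics result:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RequestReplySketch {
    // Simulated server that loses the first two request messages.
    static class FlakyServer {
        final AtomicInteger executions = new AtomicInteger();
        int drops = 2;
        // Returns null to model a lost request (or a lost reply).
        String handle(String request) {
            if (drops-- > 0) return null;
            executions.incrementAndGet();   // no duplicate filtering here
            return "reply-to:" + request;
        }
    }

    // Client side of the RR protocol: retransmit until a reply arrives
    // or the retry budget is exhausted. The reply is the implicit ACK.
    static String call(FlakyServer server, String request, int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            String reply = server.handle(request);  // stands in for send + timeout wait
            if (reply != null) return reply;
        }
        throw new RuntimeException("server unreachable");
    }

    public static void main(String[] args) {
        FlakyServer server = new FlakyServer();
        System.out.println(call(server, "read(x)", 5));
    }
}
```

Here the first two attempts time out, the third succeeds; had a *reply* been lost instead, the retry would have re-executed the request, so the caller can only assume the call ran at least once.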
60. COMMUNICATION PROTOCOLS FOR RPCs
3. The Request/Reply/Acknowledge-Reply Protocol:
This protocol is also known as the RRA (request/reply/acknowledge-
reply) protocol.
The implementation of exactly-once call semantics with RR protocol
requires the server to maintain a record of the replies in its reply
cache.
In situations where a server has a large number of clients, this may
result in servers needing to store large quantities of information.
In some implementations, servers restrict the quantity of such data by
discarding it after a limited period of time.
However, this approach is not fully reliable because sometimes it
may lead to the loss of those replies that have not yet been
successfully delivered to their clients.
62. COMMUNICATION PROTOCOLS FOR RPCs
To overcome this limitation of the RR protocol, the RRA protocol is
used, which requires clients to acknowledge the receipt of reply
messages.
The server deletes a reply from its reply cache only after
receiving an acknowledgment for it from the client.
As shown in Figure 4.9, the RRA protocol involves the transmission
of three messages per call (two from the client to the server and one
from the server to the client).
In the RRA protocol, there is a possibility that the acknowledgment
message may itself get lost.
Therefore implementation of the RRA protocol requires that the
unique message identifiers associated with request messages must be
ordered.
63. COMMUNICATION PROTOCOLS FOR RPCs
Each reply message contains the message identifier of the
corresponding request message, and each acknowledgment message
also contains the same message identifier.
This helps in matching a reply with its corresponding request and an
acknowledgment with its corresponding reply.
A client acknowledges a reply message only if it has received the
replies to all the requests previous to the request corresponding to
this reply.
Thus an acknowledgment message is interpreted as acknowledging
the receipt of all reply messages corresponding to the request
messages with lower message identifiers.
Therefore the loss of an acknowledgment message is harmless.
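A sketch of the server's reply cache under the RRA protocol, assuming ordered message identifiers: an acknowledgment for identifier n cumulatively evicts every cached reply with identifier at most n, which is why the loss of an individual acknowledgment is harmless (all names illustrative):

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class ReplyCacheSketch {
    // Replies cached per ordered message identifier, awaiting acknowledgment.
    private final SortedMap<Long, String> cache = new TreeMap<>();

    void storeReply(long messageId, String reply) {
        cache.put(messageId, reply);
    }

    // An ACK for id n acknowledges all replies with ids <= n, so a lost
    // ACK is covered by any later one.
    void acknowledge(long messageId) {
        cache.headMap(messageId + 1).clear();
    }

    int pendingReplies() {
        return cache.size();
    }

    public static void main(String[] args) {
        ReplyCacheSketch server = new ReplyCacheSketch();
        server.storeReply(1, "r1");
        server.storeReply(2, "r2");
        server.storeReply(3, "r3");
        server.acknowledge(2);   // also covers the (possibly lost) ACK for id 1
        System.out.println(server.pendingReplies());  // only reply 3 still pending
    }
}
```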
64. REMOTE METHOD INVOCATION
The RMI (Remote Method Invocation) is an API that provides a
mechanism to create distributed applications in Java.
The RMI allows an object to invoke methods on an object running in
another JVM.
The RMI provides remote communication between applications
using two objects: the stub and the skeleton.
RMI uses the stub and skeleton objects for communication with the
remote object.
A remote object is an object whose methods can be invoked from
another JVM.
65. REMOTE METHOD INVOCATION
Remote method invocation (RMI) is a generalization of RPC in an
object-oriented environment.
The object resides on the server's machine, which is different from the
client's machine; this is known as a remote object.
An object for which the instance of the data associated with it is
distributed across machines is known as a distributed object.
An example of a distributed object is an object that is replicated over
two or more machines.
A remote object is a special case of a distributed object where the
associated data are available on one remote machine.
To realize the scope of RMI recall the implementation of an RPC
using sockets.
66. REMOTE METHOD INVOCATION
Stub
The stub is an object that acts as a gateway for the client side. All the
outgoing requests are routed through it.
It resides at the client side and represents the remote object. When
the caller invokes method on the stub object, it does the following
tasks:
1) It initiates a connection with the remote Virtual Machine (JVM),
2) It writes and transmits (marshals) the parameters to the remote
Virtual Machine (JVM),
3) It waits for the result,
4) It reads (unmarshals) the return value or exception, and
5) Finally, it returns the value to the caller.
67. REMOTE METHOD INVOCATION
Skeleton
The skeleton is an object that acts as a gateway for the server-side object.
All the incoming requests are routed through it.
When the skeleton receives the incoming request, it does the
following tasks:
1) It reads the parameters for the remote method,
2) It invokes the method on the actual remote object, and
3) It writes and transmits (marshals) the result to the caller.
In the Java 2 SDK, a stub protocol was introduced that eliminates
the need for skeletons.
68. REMOTE METHOD INVOCATION
In RPC, objects are passed by value; thus, the current state of the
remote object is copied and passed from the server to the client,
necessary updates are done, and the modified state of the object is
sent back to the server.
If multiple clients try to concurrently access or update the remote object
by invoking methods in this manner, then the updates made by one
client may not be visible to another client, unless such updates are
serialized.
In addition, the propagation of multiple copies of the remote object
between the server and the various clients will consume significant
bandwidth of the network.
69. REMOTE METHOD INVOCATION
Architecture of an RMI Application
In an RMI application, we write two programs, a server program
(resides on the server) and a client program (resides on the client).
Inside the server program, a remote object is created and reference of
that object is made available for the client (using the registry).
The client program requests the remote objects on the server and tries
to invoke its methods.
The following diagram shows the architecture of an RMI application.
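A minimal, self-contained version of this structure is sketched below; in a real application the server and client would be separate programs on separate machines, and the interface name, binding name, and registry port here are all illustrative:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiSketch {
    // The remote interface: every remote method must declare RemoteException.
    public interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    // Server-side implementation of the remote object.
    static class GreeterImpl implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    static String runDemo() throws Exception {
        // --- server program: create the remote object, export it, register it ---
        GreeterImpl impl = new GreeterImpl();
        Greeter stub = (Greeter) UnicastRemoteObject.exportObject(impl, 0);
        Registry registry = LocateRegistry.createRegistry(2099);  // illustrative port
        registry.rebind("greeter", stub);

        // --- client program: look up the remote reference and invoke a method ---
        Registry clientSide = LocateRegistry.getRegistry("localhost", 2099);
        Greeter remote = (Greeter) clientSide.lookup("greeter");
        String result = remote.greet("world");   // the call travels through the stub

        // Unexport so the JVM can exit cleanly once the demo is done.
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(registry, true);
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo());
    }
}
```

Since the Java 2 SDK, `exportObject` generates the stub dynamically, so no separate stub compilation step is needed for this sketch.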
71. REMOTE METHOD INVOCATION
Transport Layer − This layer connects the client and the server. It
manages the existing connection and also sets up new connections.
Stub − A stub is a representation (proxy) of the remote object at the
client. It resides in the client system and acts as a gateway for the
client program.
Skeleton − This is the object which resides on the server side. The stub
communicates with this skeleton to pass requests to the remote object.
RRL(Remote Reference Layer) − It is the layer which manages the
references made by the client to the remote object.
72. REMOTE METHOD INVOCATION
Working of an RMI Application
The following points summarize how an RMI application works −
When the client makes a call to the remote object, it is received by
the stub which eventually passes this request to the RRL.
When the client-side RRL receives the request, it invokes a method
called invoke() of the object remoteRef.
It passes the request to the RRL on the server side.
The RRL on the server side passes the request to the Skeleton (proxy
on the server) which finally invokes the required object on the server.
The result is passed all the way back to the client.
73. REMOTE METHOD INVOCATION
Marshalling and Unmarshalling
Whenever a client invokes a method that accepts parameters on a
remote object, the parameters are bundled into a message before
being sent over the network.
These parameters may be of primitive types or objects. In the case of
primitive types, the parameters are put together and a header is
attached to them.
In case the parameters are objects, then they are serialized. This
process is known as marshalling.
At the server side, the packed parameters are unbundled and then the
required method is invoked. This process is known as unmarshalling.
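This marshalling step can be illustrated with Java's built-in object serialization, which RMI itself relies on for object parameters (the Booking class is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class MarshalSketch {
    // A parameter object must be Serializable to be marshalled by value.
    static class Booking implements Serializable {
        final String flight; final int seats;
        Booking(String flight, int seats) { this.flight = flight; this.seats = seats; }
    }

    // Marshalling: flatten the object graph into a byte stream for the network.
    static byte[] marshal(Object param) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(param);
        }
        return bytes.toByteArray();
    }

    // Unmarshalling: rebuild the object on the server side before invocation.
    static Object unmarshal(byte[] wire) throws Exception {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(wire))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] wire = marshal(new Booking("AI-101", 2));   // client side
        Booking copy = (Booking) unmarshal(wire);          // server side
        System.out.println(copy.flight + " x" + copy.seats);
    }
}
```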
74. EVENT NOTIFICATION
Many of today's information systems feature distributed
processing to achieve performance, parallelism, improved resource
utilization, proximity of processing to usage, and integration of
legacy systems.
Common to most distributed systems is the need for asynchronous,
many-to-many communication.
A distributed event notifier provides this type of communication
where publishers, subscribers, and the event service all reside on
distinct machines.
Event notification systems help establish a form of asynchronous
communication among distributed objects on heterogeneous
platforms and have numerous applications.
An example is the publish–subscribe middleware.
75. EVENT NOTIFICATION
Consider the airfares that are regularly published by the different
airlines on the WWW.
You are planning a vacation in Hawaii, so you may want to be
notified of an event when the round-trip airfare from your nearest
airport to Hawaii drops below $400.
This illustrates the nature of publish–subscribe communication.
Here, you are the subscriber of the event.
Neither publishers nor subscribers are required to know anything
about one another, but communication is possible via a brokering
arrangement.
Such event notification schemes are similar to interrupts or
exceptions in a centralized environment. By definition, they are
asynchronous.
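The airfare scenario can be sketched as an in-process publish-subscribe broker; in a real event service the publisher, broker, and subscribers would reside on distinct machines (all names and the $400 threshold are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class FareNotifierSketch {
    // The broker: publishers and subscribers know only the broker, not each other.
    static class EventBroker {
        private final List<Consumer<Integer>> subscribers = new ArrayList<>();
        void subscribe(Consumer<Integer> listener) { subscribers.add(listener); }
        void publishFare(int fare) {                 // called by an airline (publisher)
            for (Consumer<Integer> s : subscribers) s.accept(fare);
        }
    }

    public static void main(String[] args) {
        EventBroker broker = new EventBroker();
        List<Integer> alerts = new ArrayList<>();

        // Subscriber: notified when the round-trip fare drops below $400.
        broker.subscribe(fare -> { if (fare < 400) alerts.add(fare); });

        broker.publishFare(520);   // no notification
        broker.publishFare(389);   // triggers the alert
        System.out.println(alerts);
    }
}
```

Note that the publisher never names the subscriber and vice versa; all coupling goes through the broker, which is the essence of the brokering arrangement described above.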
76. EVENT NOTIFICATION
Here are a few other examples.
A smart home may place a phone call to its owner, who is away from
home, whenever the garage door is open, there is a running faucet, or
there is a power outage.
In a collaborative work environment, processes can resume the next
phase of work, when everyone has completed the current phase of
the work—these can be notified as events.
In an intensive care unit of a medical facility, physicians can define
events for which they need notification.
Holders of stocks may want to be notified whenever the price of their
favorite stock goes up by more than 5%.
An airline passenger would like to be notified via an app in her
smartphone if the time of the connecting flight changes.
78. EVENT NOTIFICATION
Apache River (originally Jini, developed by Sun Microsystems)
provides an event notification service for Java-based platforms. It
allows subscribers in one Java virtual machine (JVM) to receive
notification of events of interest from another JVM.
The essential components are as follows:
1. An event generator interface, where users may register their events
of interest.
2. A remote event listener interface that provides notification to the
subscribers by invoking the notify method. Each notification is an
instance of the remote event class. It is passed as an argument to the
notify method.
3. Third-party agents play the role of observers and help coordinate
the delivery of similar events to a group of subscribers.
79. Operating System Architecture
Operating System
An operating system is a program that acts as an interface between
the user of a computer and the computer's resources.
The purpose of an operating system is to provide an environment in
which a user may execute programs.
An operating system is an important part of almost every computer
system.
A computer system can roughly be divided into three components :
The hardware ( memory, CPU, arithmetic-logic unit, various bulk
storage, I/O, peripheral devices... )
Systems programs ( operating system, compilers, editors, loaders,
utilities...)
Application programs ( database systems, business programs... )
80. Operating System Architecture
The core software components of an operating system are collectively
known as the kernel.
The kernel has unrestricted access to all of the resources on the system.
In early monolithic systems, each component of the operating system
was contained within the kernel, could communicate directly with any
other component, and had unrestricted system access.
While this made the operating system very efficient, it also meant that
errors were more difficult to isolate, and there was a high risk of
damage due to erroneous or malicious code.
In contrast, in a layered architecture each layer communicates only
with the layers immediately above and below it, and lower-level layers
provide services to higher-level ones using an interface that hides their
implementation.
82. Operating System Architecture
Hardware
The hardware consists of the memory, CPU, arithmetic-logic unit, various
bulk storage devices, I/O, peripheral devices and other physical devices.
Kernel
In computing, the kernel is the central component of most computer
operating systems; it is a bridge between applications and the actual data
processing done at the hardware level.
The kernel's responsibilities include managing the system's resources (the
communication between hardware and software components).
Usually as a basic component of an operating system, a kernel can provide
the lowest-level abstraction layer for the resources (especially processors
and I/O devices) that application software must control to perform its
function.
83. Operating System Architecture
Types of Kernel
Monolithic kernel
Microkernels
Exokernels
Hybrid kernels
The kernel typically makes these facilities available to application
processes through inter-process communication mechanisms and system calls.
Shell
A shell is a piece of software that provides an interface for users to an
operating system which provides access to the services of a kernel.
The name shell originates from shells being an outer layer of
interface between the user and the innards of the operating system.
84. Operating System Architecture
Operating system shells generally fall into one of two categories:
command-line and graphical.
Command-line shells provide a command-line interface (CLI) to the
operating system, while graphical shells provide a graphical user
interface (GUI).
In either category the primary purpose of the shell is to invoke or
"launch" another program; however, shells frequently have additional
capabilities such as viewing the contents of directories.
Types of shells
Korn shell
Bourne shell
C shell
POSIX shell
85. Operating System Architecture
Computer Operating System Functions
An operating system performs the following functions:
Memory management
Task or process management
Storage management
Device or input/output management
Kernel or scheduling
Memory Management
Memory management is the process of managing computer memory.
Computer memory is of two types: primary and secondary
memory. Memory space is allocated to programs and software when
they need it and reclaimed when they release it.
86. Operating System Architecture
Memory management is important for the operating system involved
in multitasking wherein the OS requires switching of memory space
from one process to another.
Every single program requires some memory space for its execution,
which is provided by the memory management unit.
A system deals with two kinds of memory addresses: virtual
and physical.
Virtual addresses are the addresses a program uses, while physical
addresses refer to actual locations in RAM; the hard disk serves as
backing store when RAM is full. An operating system manages the
virtual memory address spaces and the assignment of real memory to
virtual addresses.
Before executing instructions, the CPU sends the virtual address to
the memory management unit.
87. Operating System Architecture
Subsequently, the MMU sends the corresponding physical address to
the real memory, which holds the space for the programs
or data.
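A toy model of that translation step, assuming 4 KB pages and a single-level page table (all values and names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class MmuSketch {
    static final int PAGE_SIZE = 4096;               // 4 KB pages
    // Page table: virtual page number -> physical frame number.
    private final Map<Long, Long> pageTable = new HashMap<>();

    void map(long virtualPage, long physicalFrame) {
        pageTable.put(virtualPage, physicalFrame);
    }

    // The MMU splits a virtual address into page number and offset,
    // looks the page up, and splices the offset onto the physical frame.
    long translate(long virtualAddress) {
        long page = virtualAddress / PAGE_SIZE;
        long offset = virtualAddress % PAGE_SIZE;
        Long frame = pageTable.get(page);
        if (frame == null) throw new IllegalStateException("page fault: page " + page);
        return frame * PAGE_SIZE + offset;
    }

    public static void main(String[] args) {
        MmuSketch mmu = new MmuSketch();
        mmu.map(2, 7);                               // virtual page 2 lives in frame 7
        System.out.println(mmu.translate(2L * PAGE_SIZE + 100));
    }
}
```

An unmapped page raises the equivalent of a page fault, at which point a real OS would bring the page in from backing store.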
Task or Process Management
A process is an instance of a program that is being executed; process
management deals with creating, scheduling, and terminating such
processes.
A process consists of a number of elements, such as an identifier,
a program counter, memory pointers, context data, and so on. The
process is actually an execution of the program's instructions.
There are two types of process methods: single process and
multitasking method. The single process method deals with a single
application running at a time.
The multitasking method allows multiple processes at a time.
88. Operating System Architecture
Storage Management
Storage management is a function of the operating system that
handles memory allocation of the data.
The system consists of different types of memory devices, such as
primary storage memory (RAM), secondary storage memory (hard
disk), and cache storage memory.
Instructions and data are placed in the primary storage or cache
memory, which is referenced by the running program.
However, the data is lost when the power supply is cut off. The
secondary memory is a permanent storage device.
The operating system allocates a storage place when new files are
created and the request for memory access is scheduled.
89. Operating System Architecture
Device or Input/output Management
In computer architecture, the combination of CPU and main memory
is the brain of the computer, which communicates with the outside
world through the input and output resources.
Humans interact with the machines by providing information through
I/O devices.
The display, keyboard, printer, and mouse are I/O devices. The
management of all these devices affects the throughput of a system;
therefore, the input and output management of the system is a
primary responsibility of the operating system.
90. Operating System Architecture
Scheduling
Scheduling by an operating system is the process of controlling and
prioritizing the work submitted to a processor.
The operating system maintains a constant amount of work for the
processor and thus balances the workload.
As a result, each process is completed within a stipulated time frame.
Hence, scheduling is very important in real-time systems. The
schedulers are mainly of three types:
Long-term scheduler
Short-term scheduler
Medium-term scheduler
91. Operating System Architecture
Character User Interface Operating System (CUI)
The CUI operating system is a text-based operating system, which is
used for interacting with the software or files by typing commands to
perform specific tasks.
The command-line operating system uses only the keyboard to enter
commands.
The command-line operating systems include DOS and UNIX.
A command-line operating system is generally faster than a
GUI operating system.
92. Operating System Architecture
Graphical User Interface Operating System (GUI)
The graphical mode interface operating system is a mouse-based
operating system (Windows Operating System, LINUX), wherein a
user performs the tasks or operations without typing the commands
from the keyboard.
The files or icons can be opened or closed by clicking them with a
mouse button.
In addition to this, the mouse and keyboard are used to control the
GUI operating systems for several purposes.
Many embedded projects are developed on this type of operating
system. A GUI operating system is generally slower than a
command-line operating system.
93. Operating System Architecture
Distributed Operating Systems
A distributed operating system is a modern development in the
computer domain.
Systems of this type are in widespread use all across the
world.
Different independent interconnected computers will have
communication across them through this distributed operating
system.
Every autonomous system holds its own processing and memory
units.
These systems are also termed loosely coupled systems and they
have various sizes and operations.
94. Operating System Architecture
These are referred to as loosely coupled systems or distributed
systems.
These systems' processors differ in size and function.
The major benefit of working with this type of operating
system is that a user can access files or software that are not
actually present on his own system but on some other system
connected within the network; that is, remote access is
enabled among the devices connected to that network.
A distributed operating system is system software over a collection of
independent, networked, communicating, and physically separate
computational nodes.
96. Operating System Architecture
They handle jobs which are serviced by multiple CPUs.
Each individual node holds a specific software subset of the global
aggregate operating system.
A distributed OS provides the essential services and functionality
required of an OS but adds attributes and particular configurations to
allow it to support additional requirements such as increased scale
and availability.
To a user, a distributed OS works in a manner similar to a single-
node, monolithic operating system.
That is, although it consists of multiple nodes, it appears to users and
applications as a single node.
97. Operating System Architecture
Types of Distributed Operating System
There are various types of Distributed Operating systems. Some of
them are as follows:
Client-Server Systems
Peer-to-Peer Systems
Middleware
Three-tier
N-tier
98. Operating System Architecture
Features of Distributed Operating System
There are various features of the distributed operating system. Some of
them are as follows:
Openness
It means that the system's services are openly exposed through
interfaces. Furthermore, these interfaces give only the service syntax:
for example, the type of a function, its return type, its parameters, and so on.
Interface definition languages (IDLs) are used to specify these interfaces.
Scalability
It refers to the fact that the system's efficiency should not degrade as new
nodes are added to the system. Ideally, the performance of a
system with 100 nodes should be the same as that of a system with 1000
nodes.
99. Operating System Architecture
Resource Sharing
Its most essential feature is that it allows users to share resources.
They can also share resources in a secure and controlled manner.
Printers, files, data, storage, web pages, etc., are examples of shared
resources.
Flexibility
A distributed OS's flexibility comes from its modular qualities, which let
it deliver a more advanced range of high-level services.
The quality and completeness of the kernel or microkernel simplify the
implementation of such services.
Fault Tolerance
Fault tolerance means that users may continue their work even if
the software or hardware fails.
100. Operating System Architecture
Transparency
It is the most important feature of the distributed operating system.
The primary purpose of a distributed operating system is to hide the fact
that resources are shared.
Transparency also implies that the user should be unaware that the
resources he is accessing are shared.
Furthermore, the system should appear to the user as a single
independent unit.
Heterogeneity
The components of distributed systems may differ and vary in operating
systems, networks, programming languages, computer hardware, and
implementations by different developers.
101. Operating System Architecture
Advantages and Disadvantages of Distributed Operating System
There are various advantages and disadvantages of the distributed
operating system. Some of them are as follows:
Advantages
There are various advantages of the distributed operating system. Some
of them are as follow:
It may share all resources (CPU, disk, network interface, nodes,
computers, and so on) from one site to another, increasing data
availability across the entire system.
It reduces the probability of data corruption because all data is replicated
across all sites; if one site fails, the user can access data from another
operational site.
102. Operating System Architecture
The sites operate independently of one another; as a result,
if one site crashes, the entire system does not halt.
It increases the speed of data exchange from one site to another site.
It is an open system since it may be accessed from both local and remote
locations.
It helps in the reduction of data processing time.
Most distributed systems are made up of several nodes that interact to
make them fault-tolerant. If a single machine fails, the system remains
operational.
Disadvantages
The system must decide which jobs must be executed when they must be
executed, and where they must be executed. A scheduler has limitations,
which can lead to underutilized hardware and unpredictable runtimes.
103. Operating System Architecture
It is hard to implement adequate security in DOS since the nodes and
connections must be secured.
The database connected to a DOS is relatively complicated and hard to
manage in contrast to a single-user system.
The underlying software is extremely complex and is not as well
understood as that of other systems.
The more widely distributed a system is, the more communication
latency can be expected. As a result, teams and developers must choose
between availability, consistency, and latency.
These systems aren't widely available because they're thought to be too
expensive.
Gathering, processing, presenting, and monitoring hardware use metrics
for big clusters can be a real issue.
104. Operating System Architecture
Network Operating System –
These systems run on a server and provide the capability to manage data,
users, groups, security, applications, and other networking functions.
These types of operating systems allow shared access of files, printers,
security, applications, and other networking functions over a small
private network.
One more important aspect of Network Operating Systems is that all the
users are well aware of the underlying configuration, of all other users
within the network, and of their individual connections; that's why
these computers are popularly known as tightly coupled systems.
106. Operating System Architecture
Architectures of Operating System
Mainly, there are 4 types of architectures of operating system:
Monolithic architecture: In monolithic systems, each component of
the operating system is contained within the kernel.
Layered architecture: This is an important operating system
architecture meant to overcome the disadvantages of early
monolithic systems.
Microkernel architecture: In a microkernel architecture, only the most
important services are put inside the kernel, and the rest of the OS services
are present in system application programs.
Hybrid architecture: This combines the best functionalities of all these
approaches, and hence this design is termed the hybrid-structured
operating system.
107. Operating System Architecture
In monolithic systems, each component of the operating system was
contained within the kernel, could communicate directly with any other
component, and had unrestricted system access.
While this made the operating system very efficient, it also meant that
errors were more difficult to isolate, and there was a high risk of damage
due to erroneous or malicious code.
The entire operating system works in the kernel space in the monolithic
system.
This increases the size of the kernel as well as the operating system.
This is different from the microkernel system, where only the minimum
software that is required to correctly implement an operating system is
kept in the kernel.
110. Operating System Architecture
As operating systems became larger and more complex, this approach
was largely abandoned in favour of a modular approach which grouped
components with similar functionality into layers to help operating
system designers to manage the complexity of the system.
In this kind of architecture, each layer communicates only with the layers
immediately above and below it, and lower-level layers provide services
to higher-level ones using an interface that hides their implementation.
The modularity of layered operating systems allows the
implementation of each layer to be modified without requiring any
modification to adjacent layers.
This modular approach imposes structure and consistency on
the operating system, simplifying debugging and modification.
111. Operating System Architecture
However, because all layers still have unrestricted access to the system, the
kernel is still susceptible to errant or malicious code. Many of today's
operating systems, including Microsoft Windows and Linux, implement
some level of layering.
112. Operating System Architecture
A microkernel architecture includes only a very small number of services
within the kernel in an attempt to keep it small and scalable.
The services typically include low-level memory management, inter-
process communication and basic process synchronization to enable
processes to cooperate.
In microkernel designs, most operating system components, such as
process management and device management, execute outside the kernel
with a lower level of system access.
Microkernels are highly modular, making them extensible, portable and
scalable. Operating system components outside the kernel can fail
without causing the operating system to fall over.
Once again, the downside is an increased level of inter-module
communication which can degrade system performance.
114. Operating System Architecture
Hybrid Architecture of Operating System
All the architectures discussed so far have their own advantages and
disadvantages.
Monolithic systems are quite fast but their expansion is very difficult.
Layered structure gives an efficient division of functionalities but if
the number of layers is very high, it is difficult to manage the system.
Microkernel architecture is quite efficient in isolating the core
functionalities within the microkernel but the other services which
are outside the kernel are not properly integrated.
Hence the idea is to combine the best functionalities of all
these approaches; this design is termed the hybrid-structured
operating system.
116. Operating System Architecture
It contains three layers:
Hardware abstraction layer: It is the lowermost layer that acts as
an interface between the kernel and hardware.
Microkernel layer: This layer is the microkernel itself, which
comprises the three basic functionalities, i.e., CPU scheduling,
memory management, and inter-process communication.
Application layer: This layer is in the user area and acts as an interface
between user space and the microkernel layer. It comprises the
remaining functionalities, like the file server, error detection, I/O device
management, etc.
In this way, the modular approach of the microkernel structure and the
layered approach are both retained, while keeping the number of layers
easy to manage.
117. Operating System Architecture
So, the hybrid approach is highly useful and is widely used in
present-day operating systems.
Most Mach-based operating systems, for example, run on this hybrid
architecture.
Advantages:
Easy to manage due to layered approach.
Number of layers is not very high.
Kernel is small and isolated.
Improved security and protection.
118. Operating System Protection
Protection refers to a mechanism which controls the access of
programs, processes, or users to the resources defined by a computer
system.
Protection can be seen as a helper to a multiprogramming operating
system, allowing many users to safely share a common logical name
space such as directories or files.
Need of Protection:
To prevent access by unauthorized users,
To ensure that each active program or process in the system uses
resources only as allowed by the stated policy, and
To improve reliability by detecting latent errors.
119. Operating System Protection
Role of Protection:
The role of protection is to provide a mechanism that implements the
policies defining how resources are used in the computer system.
Some policies are defined at design time, some are set by the
management of the system, and some are defined by the users of the
system to protect their own files and programs.
Every application has different policies for the use of resources, and
these may change over time, so protection of the system is not only the
concern of the designer of the operating system.
Application programmers should also design protection mechanisms to
protect their systems against misuse.
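The mechanism/policy split above can be sketched with a toy access matrix; the subjects, resources, and rights here are illustrative assumptions, not taken from any real operating system:

```python
# A minimal sketch of a protection mechanism: an access matrix mapping
# (subject, resource) pairs to the set of rights the policy grants.
# All names below are hypothetical examples.
access_matrix = {
    ("alice", "file1"): {"read", "write"},
    ("bob",   "file1"): {"read"},
}

def check_access(subject, resource, right):
    """Mechanism: consult the policy before allowing the operation."""
    return right in access_matrix.get((subject, resource), set())

assert check_access("alice", "file1", "write")
assert not check_access("bob", "file1", "write")  # the policy denies bob write access
```

The point is the separation: the matrix encodes the policy (set by designers, administrators, or users), while `check_access` is the fixed mechanism that enforces whatever the policy says.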
120. Process and Threads
What is a Process?
A process is the execution of a program that allows you to perform
the appropriate actions specified in a program.
It can be defined as an execution unit where a program runs.
The OS helps you create, schedule, and terminate the processes that
are used by the CPU.
The other processes created by the main process are called child
process.
Process operations can be easily controlled with the help of the
PCB (Process Control Block).
You can consider it the brain of the process: it contains all the
crucial information related to the process, such as process id, priority,
state, contents of the CPU registers, etc.
121. Process and Threads
What is Thread?
Thread is an execution unit that is part of a process. A process can
have multiple threads, all executing at the same time.
It is a unit of execution in concurrent programming. A thread is
lightweight and can be managed independently by a scheduler.
It helps you to improve the application performance using
parallelism.
Multiple threads share information like data, code, files, etc. We can
implement threads in three different ways:
Kernel-level threads
User-level threads
Hybrid threads
122. Process and Threads
Properties of Process
Here are the important properties of the process:
Creating each process requires a separate system call for each
process.
It is an isolated execution entity and does not share data and
information.
Processes use the IPC(Inter-Process Communication) mechanism for
communication that significantly increases the number of system
calls.
Process management takes more system calls.
A process has its own stack, heap memory, and memory and data map.
123. Process and Threads
Properties of Thread
Here are important properties of Thread:
A single system call can create more than one thread.
Threads share data and information.
Threads share the instruction, global, and heap regions; however, each
thread has its own registers and stack.
Thread management requires very few, or no, system calls, because
communication between threads can be achieved using shared
memory.
124. Process and Threads
Thread Structure
Process is used to group resources together and threads are the
entities scheduled for execution on the CPU.
The thread has a program counter that keeps track of which
instruction to execute next.
It has registers, which hold its current working variables.
It has a stack, which contains the execution history, with one frame
for each procedure called but not yet returned from.
Although a thread must execute in some process, the thread and its
process are different concepts and can be treated separately.
Having multiple threads running in parallel in one process is
similar to having multiple processes running in parallel in one
computer.
126. Process and Threads
In the former case, the threads share an address space, open files, and
other resources.
In the latter case, processes share physical memory, disks, printers and
other resources.
In Fig. (a), we see three traditional processes. Each process has its own
address space and a single thread of control.
In contrast, in Fig. (b), we see a single process with three threads of
control.
Although in both cases we have three threads, in Fig. (a) each of them
operates in a different address space, whereas in Fig.(b) all three of them
share the same address space.
Like a traditional process (i.e., a process with only one thread), a thread
can be in any one of several states: running, blocked, ready, or terminated.
127. Process and Threads
When multithreading is present, processes normally start with a single
thread present.
This thread has the ability to create new threads by calling a library
procedure thread_create.
When a thread has finished its work, it can exit by calling a library
procedure thread_exit.
One thread can wait for a (specific) thread to exit by calling a procedure
thread_join. This procedure
blocks the calling thread until a (specific) thread has exited.
Another common thread call is thread_yield, which allows a thread to
voluntarily give up the CPU to let
another thread run.
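These four library calls can be sketched with Python's threading module: Thread()/start() plays the role of thread_create, join() of thread_join, returning from the thread function of thread_exit, and time.sleep(0) is used here as a rough stand-in for thread_yield:

```python
import threading
import time

results = []

def worker(name):
    # Body of the new thread; returning from here is the analogue of thread_exit.
    time.sleep(0)            # rough analogue of thread_yield: give up the CPU
    results.append(name)

# thread_create: make and start two new threads inside this one process
t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start()
t2.start()

# thread_join: block the calling thread until the target thread has exited
t1.join()
t2.join()

print(sorted(results))   # both workers ran inside the same address space
```

Note that both threads append to the same `results` list without any message passing, which is exactly the shared-address-space property discussed above.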
128. Process vs Threads
S.No | Process | Thread
1. | A process is any program in execution. | A thread is a segment of a process.
2. | A process takes more time to terminate. | A thread takes less time to terminate.
3. | It takes more time for creation. | It takes less time for creation.
4. | It also takes more time for context switching. | It takes less time for context switching.
5. | A process is less efficient in terms of communication. | A thread is more efficient in terms of communication.
129. Process vs Threads
S.No | Process | Thread
6. | A process consumes more resources. | A thread consumes fewer resources.
7. | A process is isolated. | Threads share memory.
8. | A process is called a heavyweight process. | A thread is called a lightweight process.
9. | Process switching uses an interface in the operating system. | Thread switching does not require a call into the operating system or an interrupt to the kernel.
10. | If one process is blocked, it does not affect the execution of other processes. | While one (server) thread is blocked, a second thread in the same task cannot run.
130. Distributed Shared Memory(DSM)
Distributed Shared Memory (DSM) implements the shared memory
model in a distributed system that has no physically shared memory.
The shared memory model provides a virtual address space shared
between any or all nodes.
To overcome the high cost of communication in distributed systems,
DSM systems move data to the location of access.
Data moves between main memory and secondary memory (within
a node) and between the main memories of different nodes.
DSM refers to the shared memory paradigm applied to loosely coupled
distributed-memory systems.
132. Distributed Shared Memory(DSM)
The distributed shared memory (DSM) implements the shared memory
model in distributed systems, which have no physical shared memory
The shared memory model provides a virtual address space shared
between all nodes
To overcome the high cost of communication in distributed systems,
DSM systems move data to the location of access
Data moves between main memory and secondary memory (within a node)
and between main memories of different nodes
Each data object is owned by a node: the initial owner is the node that
created the object, and ownership can change as the object moves from
node to node.
When a process accesses data in the shared address space, the mapping
manager maps shared memory address to physical memory (local or
remote)
133. Distributed Shared Memory(DSM)
DSM paradigm provides process with shared address space
Primitives for shared memory:
  Read(address)
  Write(address, data)
Read returns the data item referenced by address, and write sets the
contents referenced by address to the value of data.
The shared memory paradigm gives the system the illusion of physically
shared memory.
DSM refers to shared memory paradigm applied to loosely coupled
distributed memory systems
Shared memory exists only virtually
Similar concept to virtual memory
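As a rough illustration (not a real DSM implementation), the two primitives can be modeled with a mapping manager that routes each access to whichever node is assumed to hold that address; the node memories and ownership table below are illustrative assumptions:

```python
# Toy model of the DSM primitives: a shared address space partitioned
# across two nodes, with a mapping manager that routes each access.
node_memory = {0: {}, 1: {}}                      # "physical" memory of each node
OWNER = {addr: addr % 2 for addr in range(8)}     # which node holds each address

def write(address, data):
    # Write sets the contents referenced by address to the value of data,
    # wherever that address physically lives.
    node_memory[OWNER[address]][address] = data

def read(address):
    # Read returns the data item referenced by address.
    return node_memory[OWNER[address]].get(address)

write(3, "hello")
assert read(3) == "hello"   # same value, regardless of which node stores it
```

The caller never mentions a node: the mapping from shared address to physical location is entirely the manager's job, which is the illusion DSM provides.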
134. Distributed Shared Memory(DSM)
DSM is also known as DSVM (distributed shared virtual memory).
DSM provides a virtual address space shared among processes on loosely
coupled processors.
DSM is basically an abstraction that integrates the local memory of
different machines into a single logical entity shared by cooperating
processes.
The initial owner is the node that created the object; ownership can
change as the object moves from node to node.
When a process accesses data in the shared address space, the
mapping manager maps the shared memory address to physical memory
(local or remote).
136. Distributed Shared Memory(DSM)
GENERAL ARCHITECTURE OF DSM SYSTEMS
The nodes are connected by a high-speed communication network.
A simple message-passing system allows processes on different
nodes to exchange messages with each other.
The DSM abstraction presents a large shared-memory space to the
processors of all nodes.
In contrast to the shared physical memory in tightly coupled parallel
architectures, the shared memory of DSM exists only virtually.
A software memory-mapping manager routine in each node maps the
local memory onto the shared virtual memory.
To facilitate the mapping operation, the shared-memory space is
partitioned into blocks.
137. Distributed Shared Memory(DSM)
Data caching is a well-known solution to address memory access
latency.
The idea of data caching is used in DSM systems to reduce network
latency.
That is, the main memory of individual nodes is used to cache pieces
of the shared-memory space.
The memory-mapping manager of each node views its local memory
as a big cache of the shared-memory space for its associated
processors.
The basic unit of caching is a memory block.
When a process on a node accesses some data from a memory block
of the shared-memory space, the local memory-mapping manager
takes charge of its request.
138. Distributed Shared Memory(DSM)
If the memory block containing the accessed data is resident in the
local memory, the request is satisfied by supplying the accessed data
from the local memory.
Otherwise, a network block fault is generated and the control is
passed to the operating system.
The operating system then sends a message to the node on which the
desired memory block is located to get the block.
The missing block is migrated from the remote node to the client
process's node and the operating system maps it into the application's
address space.
The faulting instruction is then restarted and can now complete.
139. Distributed Shared Memory(DSM)
Therefore, the scenario is that data blocks keep migrating from one
node to another on demand but no communication is visible to the
user processes.
That is, to the user processes the system looks like a tightly coupled
shared-memory multiprocessor system in which multiple processes
freely read and write the shared memory at will.
Copies of data cached in local memory eliminate network traffic for a
memory access on a cache hit, that is, an access to an address whose
data is stored in the cache.
Therefore, network traffic is significantly reduced if applications
show a high degree of locality in their data accesses.
140. Distributed Shared Memory(DSM)
DESIGN AND IMPLEMENTATION ISSUES OF DSM
Important issues involved in the design and implementation of DSM
systems are as follows:
1. Granularity:-
Granularity refers to the block size of a DSM system, that is, to the
unit of sharing and the unit of data transfer across the network when
a network block fault occurs.
Possible units are a few words, a page, or a few pages.
Selecting proper block size is an important part of the design of a
DSM system because block size is usually a measure of the
granularity of parallelism explored and the amount of network traffic
generated by network block faults.
141. Distributed Shared Memory(DSM)
2. Structure of shared-memory space:-
Structure refers to the layout of the shared data in memory.
The structure of the shared-memory space of a DSM system is
normally dependent on the type of applications that the DSM system
is intended to support.
3. Memory coherence and access synchronization:-
In a DSM system that allows replication of shared data items, copies
of shared data items may simultaneously be available in the main
memories of a number of nodes.
In this case, the main problem is to solve the memory coherence
problem that deals with the consistency of a piece of shared data
lying in the main memories of two or more nodes.
142. Distributed Shared Memory(DSM)
In a DSM system, concurrent accesses to shared data may be
generated.
Therefore, a memory coherence protocol alone is not sufficient to
maintain the consistency of shared data.
In addition, synchronization primitives, such as semaphores, event
count, and lock, are needed to synchronize concurrent accesses to
shared data.
4. Data location and access:- To share data in a DSM system, it
should be possible to locate and retrieve the data accessed by a user
process.
Therefore, a DSM system must implement some form of data block
locating mechanism in order to service network data block faults to
meet the requirement of the memory coherence semantics being used.
143. Distributed Shared Memory(DSM)
5. Replacement strategy:-
If the local memory of a node is full, a cache miss at that node implies
not only a fetch of the accessed data block from a remote node but
also a replacement.
That is, a data block of the local memory must be replaced by the new
data block.
Therefore, a cache replacement strategy is also necessary in the design
of a DSM system.
6. Thrashing:-
In a DSM system, data blocks migrate between nodes on demand.
Therefore, if two nodes compete for write access to a single data item,
the corresponding data block may be transferred back and forth at such
a high rate that no real work can get done.
144. Distributed Shared Memory(DSM)
A DSM system must use a policy to avoid this situation (usually known
as thrashing).
7. Heterogeneity:-
The DSM systems built for homogeneous systems need not address the
heterogeneity issue.
However, if the underlying system environment is heterogeneous, the
DSM system must be designed to take care of heterogeneity so that it
functions properly with machines having different architectures.
Disadvantages of DSM:
It can cause a performance penalty.
It must provide protection against simultaneous access to shared data,
e.g., by locks.
Performance on irregular problems can be poor.
145. Distributed Shared Memory(DSM)
Advantages of DSM:
The system is scalable.
It hides the message passing.
It can handle complex and large databases without replication or
sending the data to processes.
DSM is usually cheaper than using a multiprocessor system.
There is no memory access bottleneck, as there is no single bus.
DSM provides a large virtual memory space.
DSM programs are portable, as they use a common DSM programming
interface.
It shields the programmer from send/receive primitives.
DSM can (possibly) improve performance by speeding up data access.
146. Distributed Shared Memory(DSM)
Algorithm for implementing Distributed Shared Memory
A distributed shared memory (DSM) system is a resource management
component of a distributed operating system that implements the shared
memory model in a distributed system that has no physically
shared memory. The shared memory model provides a virtual
address space which is shared by all nodes in the distributed system.
The central issues in implementing DSM are:
how to keep track of location of remote data.
how to overcome communication overheads and delays involved in
execution of communication protocols in system for accessing
remote data.
how to make shared data concurrently accessible at several nodes to
improve performance.
147. Distributed Shared Memory(DSM)
1. Central Server Algorithm:
In this, a central server maintains all shared data.
It services read requests from other nodes by returning the data items
to them and write requests by updating the data and returning
acknowledgement messages.
A time-out can be used in case of a failed acknowledgement, while
sequence numbers can be used to avoid duplicate write requests.
It is simpler to implement, but the central server can become a
bottleneck; to overcome this, the shared data can be distributed
among several servers.
This distribution can be by address or by using a mapping function to
locate the appropriate server.
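A minimal sketch of the central-server idea, assuming per-client sequence numbers to suppress duplicate (retransmitted) writes; all names are illustrative:

```python
# Sketch of the central-server algorithm: one server holds all shared data;
# clients tag writes with sequence numbers so a retransmitted write
# (e.g. after a lost acknowledgement) is applied only once.
class CentralServer:
    def __init__(self):
        self.data = {}
        self.last_seq = {}               # last sequence number seen per client

    def read(self, address):
        # Serve a read request by returning the data item.
        return self.data.get(address)

    def write(self, client, seq, address, value):
        if self.last_seq.get(client, -1) >= seq:
            return "ack"                 # duplicate request: re-ack, don't reapply
        self.last_seq[client] = seq
        self.data[address] = value
        return "ack"                     # acknowledgement message

server = CentralServer()
server.write("c1", 1, 0x10, 42)
server.write("c1", 1, 0x10, 42)          # retransmission of seq 1: ignored
assert server.read(0x10) == 42
```

Every access crosses the network to one node, which is what makes this scheme simple but a potential bottleneck.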
149. Distributed Shared Memory(DSM)
2. Migration Algorithm:
In contrast to the central server algorithm, where every data access
request is forwarded to the location of the data, here the data is shipped
to the location of the data access request, which allows subsequent
accesses to be performed locally.
It allows only one node to access a shared data item at a time, and the
whole block containing the data item migrates instead of the individual
item requested.
It is susceptible to thrashing where pages frequently migrate between
nodes while servicing only a few requests.
This algorithm provides an opportunity to integrate DSM with the
virtual memory provided by the operating system at individual nodes.
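A toy sketch of the migration idea, where the whole block moves to the requesting node so that later accesses there are local; this is illustrative, not a real implementation:

```python
# Sketch of the migration algorithm: exactly one node holds the block at a
# time, and the block moves to whichever node requests it.
class MigratingBlock:
    def __init__(self, data, owner):
        self.data = data
        self.owner = owner           # the single node currently holding the block

    def access(self, node, offset):
        if self.owner != node:
            self.owner = node        # migrate the whole block to the requester
        return self.data[offset]     # the access is now local

blk = MigratingBlock(["x", "y"], owner="A")
assert blk.access("B", 0) == "x"     # block migrates A -> B
assert blk.owner == "B"              # subsequent accesses on B are local
```

If nodes A and B alternate accesses, the block ping-pongs between them on every request, which is precisely the thrashing risk noted above.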
151. Distributed Shared Memory(DSM)
3. Read Replication Algorithm:
This extends the migration algorithm by replicating data blocks and
allowing multiple nodes to have read access, or one node to have
read/write access.
It improves system performance by allowing multiple nodes to
access data concurrently.
The write operation is expensive here, as all copies of a shared
block at the various nodes will have to be either invalidated or updated
with the current value to maintain the consistency of the shared data
block.
The DSM must keep track of the location of all copies of data blocks.
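The invalidate-on-write behaviour can be sketched with a toy copy-set, an illustrative stand-in for the copy tracking a real DSM would do:

```python
# Sketch of read replication: many nodes may hold read copies, but a write
# first invalidates every other copy, so readers must re-fetch afterwards.
class ReplicatedBlock:
    def __init__(self, value):
        self.value = value
        self.copies = set()          # nodes currently holding a read copy

    def read(self, node):
        self.copies.add(node)        # replicate the block to the reader
        return self.value

    def write(self, node, value):
        self.copies = {node}         # invalidate all other copies (the expensive part)
        self.value = value

blk = ReplicatedBlock(0)
blk.read("A"); blk.read("B")         # two nodes read concurrently
blk.write("A", 7)
assert blk.copies == {"A"}           # B's copy was invalidated
assert blk.read("B") == 7            # B re-fetches the current value
```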
153. Distributed Shared Memory(DSM)
4. Full Replication Algorithm:
It is an extension of read replication algorithm which allows multiple
nodes to have both read and write access to shared data blocks.
Since many nodes can write shared data concurrently, access to the
shared data must be controlled to maintain its consistency.
To maintain consistency, it can use gap-free sequencing, in which all
nodes wishing to modify shared data send the modification to a
sequencer, which then assigns a sequence number and multicasts
the modification with the sequence number to all nodes that have a copy
of the shared data item.
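A minimal sketch of the sequencer idea, assuming reliable delivery to every replica (all class names are illustrative):

```python
# Sketch of full replication with a sequencer: every write goes to the
# sequencer, which assigns gap-free sequence numbers and multicasts the
# update, so all replicas apply writes in the same total order.
class Replica:
    def __init__(self):
        self.memory = {}
        self.applied = 0

    def apply(self, seq, address, value):
        assert seq == self.applied + 1   # a gap would mean a missed update
        self.applied = seq
        self.memory[address] = value

class Sequencer:
    def __init__(self, replicas):
        self.seq = 0
        self.replicas = replicas

    def submit(self, address, value):
        self.seq += 1                    # gap-free sequence number
        for replica in self.replicas:    # multicast to every copy holder
            replica.apply(self.seq, address, value)

r1, r2 = Replica(), Replica()
seqr = Sequencer([r1, r2])
seqr.submit(0, "a")
seqr.submit(1, "b")
assert r1.memory == r2.memory == {0: "a", 1: "b"}
```

Because every replica sees the same gap-free numbering, concurrent writers cannot leave the copies in different states.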
155. Case Study - CORBA
The Common Object Request Broker Architecture (CORBA) is a
standard developed by the Object Management Group (OMG) to
provide interoperability among distributed objects.
CORBA is the world's leading middleware solution enabling the
exchange of information, independent of hardware platforms,
programming languages, and operating systems.
CORBA is essentially a design specification for an Object Request
Broker (ORB), where an ORB provides the mechanism required for
distributed objects to communicate with one another, whether locally
or on remote devices, written in different languages, or at different
locations on a network.
156. Case Study - CORBA
Why do we need CORBA?
Incompatibility of systems during data transfer.
Collaboration between systems on different operating systems,
programming languages or computing hardware was not possible.
157. Case Study - CORBA
History
In 1991, a specification for an object request broker architecture
known as CORBA (Common Object Request Broker Architecture)
was agreed by a group of companies.
This was followed in 1996 by the CORBA 2.0 specification, which
defined standards enabling implementations made by different
developers to communicate with one another.
These standards are called the General Inter-ORB protocol or GIOP.
It is intended that GIOP can be implemented over any transport layer
with connections.
The implementation of GIOP for the Internet uses the TCP protocol
and is called the Internet Inter-ORB Protocol or IIOP [OMG 2004a].
CORBA 3 first appeared in late 1999 and a component model has
been added recently.
158. Case Study - CORBA
The OMG (Object Management Group) was formed in 1989 with a view
to encouraging the adoption of distributed object systems in order to
gain the benefits of object-oriented programming for software
development and to make use of distributed systems, which were
becoming widespread.
To achieve its aims, the OMG advocated the use of open systems based
on standard object-oriented interfaces.
These systems would be built from heterogeneous hardware, computer
networks, operating systems and programming languages.
An important motivation was to allow distributed objects to be
implemented in any programming language and to be able to
communicate with one another. They therefore designed an interface
language that was independent of any specific implementation language.
159. Case Study - CORBA
CORBA is a middleware design that allows application programs to
communicate with one another irrespective of their programming
languages, their hardware and software platforms, the networks they
communicate over and their implementors.
Applications are built from CORBA objects, which implement
interfaces defined in CORBA’s interface definition language, IDL.
Clients access the methods in the IDL interfaces of CORBA objects
by means of RMI.
The middleware component that supports RMI is called the Object
Request Broker or ORB.
The specification of CORBA has been sponsored by members of the
Object Management Group (OMG).
160. Case Study - CORBA
Many different ORBs have been implemented from the specification,
supporting a variety of programming languages including Java and
C++.
CORBA services provide generic facilities that may be of use in a
wide variety of distributed applications.
They include the Naming Service, the Event and Notification
Services, the Security Service, the Transaction and Concurrency
Services and the Trading Service.
161. Case Study - CORBA
Data communication from client to server is accomplished through a
well-defined object-oriented interface.
The Object Request Broker (ORB) determines the location of the
target object, sends a request to that object, and returns any response
back to the caller.
Through this object-oriented technology, developers can take
advantage of features such as inheritance, encapsulation,
polymorphism, and runtime dynamic binding.
These features allow applications to be changed, modified and re-
used with minimal changes to the parent interface.
The illustration below identifies how a client sends a request to a
server through the ORB:
162. Case Study - CORBA
[Figure: a client's request passing through the ORB to the server]
163. Case Study - CORBA
CORBA Architecture
OMA (Object Management Architecture) is a specification proposed
by the OMG for defining the constraints which a distributed, object-
oriented application should conform to.
Based on this specification, CORBA emerged.
We may distinguish 5 substantial components in this standard:
ORB - Object Request Broker
IDL - Interface Definition Language
DII - Dynamic Invocation Interface
IR - Interface Repository
OA - Object Adapters
164. Case Study - CORBA
[Figure: the components of the CORBA architecture]
165. Case Study - CORBA
ORB - Object Request Broker
ORB is a fundamental part of the CORBA implementation.
It is software designed for ensuring the communication between
objects in the network.
In particular, it enables localization of remote objects, passing
arguments to methods and returning the results of remote calls.
It may redirect a request to another ORB agent.
CORBA defines general rules for inter-agent communication in a
language-independent manner as all the protocols are defined in the
IDL language.
166. Case Study - CORBA
IDL - Interface Definition Language
The client as well as the server are separated from ORB with the IDL
layer.
Similarly as in remote RMI objects, CORBA objects are represented
by interfaces.
As CORBA allows for implementations in different programming
languages, there is a need for a universal, intermediate-level language
(IDL) for specifying remote objects.
Its syntax is similar to Java or C++. However, it is not an ordinary
programming language. It is used only for specifying interfaces from
which the helper source code is generated (stubs, skeletons and other).
A special compiler is required to do this (it must be supplied by the
CORBA implementation provider).
167. Case Study - CORBA
Of course there must exist some mapping of the IDL language to the
target (implementation) language.
To date, the OMG has defined IDL mappings to the following
programming languages: Ada, C, C++, COBOL, Common Lisp, Java
and Smalltalk.
Several other organizations introduce more or less formal mappings
to other languages like: Perl, Python, TCL, Eiffel, Haskell, Erlang.
DII - Dynamic Invocation Interface
An interface may be a static one (known during the compilation
phase) or a dynamic one. A static interface is represented on the
client side by a stub (a generated class).
A dynamic interface allows clients to use CORBA objects which were
not known when the client was compiled. This is what the DII enables.
168. Case Study - CORBA
IR - Interface Repository
A dynamic interface is stored in an interface repository - IR.
A client may obtain from the IR an interface which was not known
when the client was compiled.
Based on the interface a client may send a request to an object
represented by this interface.
OA - Object Adapters
Object adapters constitute an intermediate layer between ORB and
CORBA objects.