Shriji Jay Swaminarayan Bapa
2010-2011

Submitted By:
Jay J Bhatt
Vishwanath Kamble
7th SEM ISE
Srinivas Institute of Technology

Abstract
The purpose of this paper is to understand how to share the load on one system by process sharing using the Linux platform. If one system is executing some task and the load on that system becomes too high, load sharing lets us distribute the load to other systems.
History

The days of supercomputers and mainframes dominating computing are over. With the cost benefits of the mass production of smaller machines, the majority of today's computing power exists in the form of PCs or workstations, with more powerful machines performing more specialized tasks. Even with increased computer power and availability, some tasks require more resources. Load balancing and process migration allocate processes to interconnected workstations on a network to better take advantage of available resources.

This paper presents a survey of process migration mechanisms. There have been other survey papers on process migration, but this paper is more current and includes one new algorithm, the Freeze Free algorithm.

Motivations for load balancing and process migration are presented in Section 1. Section 2 gives a brief overview of load balancing. Section 3 describes process migration; the differences between process migration and load balancing are discussed, and several of the current process migration implementations and algorithms are examined. Section 4 presents future work, possible improvements, and extended applications of process migration. Finally, concluding remarks are given.

Introduction

Process migration is the act of transferring an active process between two machines and restoring the process, from the point it left off, on the selected destination node. The purpose of this is to provide an enhanced degree of dynamic load distribution, fault resilience, eased system administration, and data access locality. The potential of process migration is great, especially in a large network of personal workstations.
In a typical working environment, there generally exists a large pool of idle workstations whose power goes unharnessed if the potential of process migration is not tapped. A reliable and efficient migration module that helps handle inconsistencies in load distribution, and allows for effective usage of the system's resource potential, is therefore highly desirable. As high-performance facilities shift from supercomputers to networks of workstations, and with the ever-increasing role of the World Wide Web, we expect migration to play a more important role and eventually to be widely adopted. This paper reviews the field of process migration by summarizing the key concepts, giving an overview of the most important implementations, and providing a unique alternative implementation of our own. Design and implementation issues of process migration are analyzed in general, and these are used as pointers in describing our implementation.

Figure 1: Process Migration and Mobility

Process migration, the main theme of this paper, primarily involves transferring both code and data across nodes. It also involves the transfer of authorisation information, for instance access to a shared resource, but its scope is limited to a single administrative domain. Mobile agents, by contrast, involve transferring code, data, and complete authorisation information, to act on behalf of the owner on a wide scale, such as the World Wide Web.

Goals of Process Migration

1) Harnessing resource locality: Processes running on a node distant from the node that houses the data they are using tend to spend most of their time communicating between the nodes for the sake of accessing that data.
Process migration can be used to move a distant process closer to the data it is processing, thereby ensuring it spends most of its time doing useful work.

2) Resource sharing: Nodes which have a large amount of resources can act as receiver nodes in a process migration environment.

3) Effective load balancing: Migration is particularly important in receiver-initiated distributed systems, where a lightly loaded node announces its availability, thereby enabling the arbitrator to provide its processing power to another node which is relatively heavily loaded.

4) Fault tolerance: This aspect of a system is improved by migrating a process away from a partially failed node, or in the case of long-running processes when different kinds of failures are probable. In conjunction with checkpointing, this goal can be achieved.

5) Eased system administration: When a node is about to be shut down, the system can migrate the processes running on it to another node, thereby enabling each process to run to completion, either on the destination node or back on the source node by migrating it again.

6) Mobile computing: Users may decide to migrate a running process from their workstations to their mobile computers, or vice versa, to exploit the large amount of resources that a workstation can provide.

Motivation

Load balancing and process migration are used in a network of workstations (NOW) to share the load of heavily loaded processors with more lightly loaded processors. The major reason for load balancing is that a lot of distributed processing power goes unused.
Instead of overloading a few machines with many tasks, it makes sense to allocate some of these tasks to under-used machines to increase performance. This allows tasks to be completed quickly, and possibly in parallel, without affecting the performance of users on other machines. There are also more subtle motivations for performing process migration. By moving a process, it may be possible to improve communication performance. For example, if a process is accessing a huge remote data source, it may be more efficient to move the process to the data source instead of moving the data to the process. This results in less communication and delay and makes more efficient use of network resources. Process migration can also be used to move communicating processes closer together.

Other motivations for process migration include providing higher availability, simplifying reconfiguration, and utilizing special machine capabilities. Higher availability can be achieved by migrating a process, presumably a long-running one, from a machine that is prone to failure or is about to be disconnected from the network. Thus, the process will continue to be available even after machine failure or disconnection. This availability is desirable in reconfiguration, where certain processes (e.g. name servers) are required to be continually operational. Migration allows such a process to be available almost continually, except for migration time, as opposed to restarting the process after the service is moved to another machine. Finally, certain machines may have better capabilities (e.g. fast floating-point processing) which can be utilized by moving the process to that machine. In this case, the process can be started on a regular machine and then migrated to a more powerful machine when one becomes available. Process migration may also find application in mobile computing and wide area computing.
In mobile computing, it may be desirable to migrate a process to and from the base station supporting the mobile unit. In this way, the process can optimize its performance based on network conditions and the relative processing power of the mobile unit and the base station. Also, as a mobile unit moves, a process associated with it running on a base station can migrate with it to its new base station. This avoids communication between the base stations to support the mobile unit's process. Process migration also has applications in wide area computing, as processes can be migrated to the location of the data that they use. This is important because, despite increasing network bandwidth, network delays are not being reduced significantly. Thus, it may be beneficial to move a process to the location of the data to improve performance.

In total, there are many motivations for load balancing and process migration. The goal of these techniques is to improve performance by distributing tasks to utilize idle processing power. Process migration can also reduce network transmission and provide high availability. The following sections examine how load balancing and process migration achieve these goals.

Load Balancing

Load balancing is the static allocation of tasks to processors to increase throughput and overall processor utilization. Load balancing can be done among individual processors in a parallel machine or among interconnected workstations in a network. In either case, there is an implicit assumption of cooperation between the processors; that is, a processor must be willing to execute a process for any other. The key word in the load balancing definition is static. The most fundamental issue in load balancing is determining a measure of load.
There are many possible measures of load, including: number of tasks in the run queue, load average, CPU utilization, I/O utilization, amount of free CPU time/memory, etc., or any combination of these indicators. Other factors to consider when deciding where to place a process include nearness to resources, processor and operating system capabilities, and specialized hardware/software features. The "best" load measure as determined by empirical studies is "number of tasks in the run queue". This statistic is readily available using current operating system calls, and it is less time-sensitive than many other measures. Even though the number of tasks in the system is a dynamic measure, it changes more slowly than, for example, the CPU or I/O utilizations, which may vary within a task's execution. The time-sensitivity of an indicator is very important because load balancing is performed with stale information: each site makes load balancing decisions based on its view of the loads at other sites.

There are numerous proposed load balancing algorithms. Some of these algorithms are topology-specific, very theoretical (and hence not practical), or overly simplistic.

As a concrete example, load balancing ensures that print jobs are always output as quickly as possible. It does this by evenly distributing the workload so that, for example, if one workflow is busy processing a large-volume print job, or if one server encounters a problem, the print job is automatically diverted to another workflow. Once a print job starts being processed, it is deleted from the hotfolder, ensuring that it is not processed by more than one workflow.

Process Migration

Process migration is the movement of a currently executing process to a new processor. Process migration allows the system to take advantage of changes in process execution or network utilization during the execution of a process. A combination of load balancing and process migration is the most beneficial: by providing a process migration facility, a load balancing algorithm has flexibility in its initial allocation, as a process can be moved at any time. In addition, process migration must handle movement of process state, communication migration, and process restart given state information. Process migration must also be performed quickly to realize an increase in performance.
The process migration latency is the time from issuing the migration request until the process can execute on the new machine. Excision is the process of extracting process information from the operating system kernel, and insertion is the process of inserting process information into the operating system kernel. Process migration algorithms must also handle process immobility: a process is considered immobile if it should not be moved from its current location. Such processes include I/O processes, which interact heavily with local resources.

Modules of Process Migration

1) Central Server: The centralized server locates the appropriate machine when an overload signal is received. Every machine on the network communicates with this module, first to establish its identity and then to report its load status.

2) Load Balancer: The Load Balancer keeps track of the relative loads of the machines on the network and chooses the appropriate machine when an overload signal is received.

3) Checkpointer: The Checkpointer sends a signal to the process chosen by the Load Balancer for pre-emption, making it dump a core file. Thereafter, this module traverses the /proc directory to locate the attributes of the process selected for preemption. It then creates a file named filedescriptors which stores details about all the open files used by the process.
These are used as checkpointing information.

4) Restarter: The Restarter uses the Linux ptrace system call to get and set the values of the attributes of the process.

5) File Transferrer: The FileTransferrer establishes a UDP connection with the main server and transfers the aforementioned files to that server when the overload message is sent. The files sent by this module are the core dump and the filedescriptors file.

6) Load Calculator: The Load Calculator is a daemon that runs on every node on the network to calculate the load at regular intervals. The calculation formula is a parametric equation based on various parameters; the total load on the system is calculated as the sum of the loads of the individual processes. Memory and CPU usage are the most significant parameters in the calculation of load. The /proc file system has one directory per process, and these directories contain the relevant information regarding memory and CPU usage for all processes in the system. When the load exceeds a certain threshold value, this module sends an overload signal to the main server. This in turn begins the process of preemption, checkpointing and scheduling.

Fig.: Process Migration and its modules

Homogeneous Process Migration

Homogeneous process migration involves migrating processes in a homogeneous environment, where all systems have the same architecture and operating system but not necessarily the same resources or capabilities. Process migration can be performed either at the user level or at the kernel level.

1) User-Level Process Migration

User-level process migration techniques support process migration without changing the operating system kernel.
User-level migration implementations are easier to develop and maintain, but they have two common problems: they cannot access kernel state, which means they cannot migrate all processes; and they must cross the kernel/application boundary using kernel requests, which are slow and costly.

2) Kernel-Level Process Migration

Kernel-level process migration techniques modify the operating system kernel to make process migration easier and more efficient. Kernel modifications allow migration to be done more quickly and to cover more types of processes.

The Linux kernel is a monolithic kernel, i.e. it is one single large program where all the functional components of the kernel have access to all of its internal data structures and routines. The alternative is the microkernel structure, where the functional pieces of the kernel are broken out into units with a strict communication mechanism between them. The monolithic structure makes adding new components to the kernel, via the configuration process, rather time consuming. The best and most robust alternative is the ability to dynamically load and unload components of the operating system using Linux kernel modules.

Linux kernel modules are pieces of code which can be dynamically linked into the kernel (according to need), even after system bootup. They can be unlinked from the kernel and removed when they are no longer needed. Mostly, Linux kernel modules are used for device drivers or pseudo-device drivers, such as network drivers or file systems. When a Linux kernel module is loaded, it becomes part of the Linux kernel like normal kernel code and functionality, and it possesses the same rights and responsibilities as kernel code.

A kernel module is not an independent executable, but an object file which will be linked into the kernel at runtime.
As a result, modules should be compiled with the -c flag.

Loading a module into the kernel involves four major tasks:

1) Preparing the module in user space: read the object file from disk and resolve all undefined symbols.
2) Allocating memory in the kernel address space to hold the module text, data, and other relevant information.
3) Copying the module code into this newly allocated space and providing the kernel with any information necessary to maintain the module (such as the new module's own symbol table).
4) Executing the module initialization routine, init_module() (now in the kernel).

The Linux module loader, insmod, is responsible for reading an object file to be used as a module, obtaining the list of kernel symbols, resolving all references in the module with these symbols, and sending the module to the kernel for execution. While insmod has many features, it is most commonly invoked as insmod module.o, where module.o is an object file corresponding to a module.

ELF File Format

ELF (Executable and Linking Format) is a file format that defines how an object file is composed and organized. With this information, the kernel and the binary loader know how to load the file, where to look for the code, where to look for the initialized data, which shared libraries need to be loaded, and so on.

An ELF file has two views: the program header shows the segments used at run time, whereas the section header lists the set of sections of the binary. Each ELF file is made up of one ELF header, followed by file data.
The file data can include:

- Program header table, describing zero or more segments
- Section header table, describing zero or more sections
- Data referred to by entries in the program header table or section header table

The segments contain information that is necessary for run-time execution of the file, while sections contain important data for linking and relocation. Each byte in the entire file is owned by no more than one section at a time, but there can be orphan bytes which are not covered by any section. In the normal case of a Unix executable, one or more sections are enclosed in one segment. Some object file control structures can grow, because the ELF header contains their actual sizes; if the object file format changes, a program may encounter control structures that are larger or smaller.

The 32-bit ELF header is declared as follows:

    #define EI_NIDENT 16

    typedef struct {
        unsigned char e_ident[EI_NIDENT];
        Elf32_Half    e_type;
        Elf32_Half    e_machine;
        Elf32_Word    e_version;
        Elf32_Addr    e_entry;
        Elf32_Off     e_phoff;
        Elf32_Off     e_shoff;
        Elf32_Word    e_flags;
        Elf32_Half    e_ehsize;
        Elf32_Half    e_phentsize;
        Elf32_Half    e_phnum;
        Elf32_Half    e_shentsize;
        Elf32_Half    e_shnum;
        Elf32_Half    e_shstrndx;
    } Elf32_Ehdr;

e_ident: The initial bytes mark the file as an object file and provide machine-independent data with which to decode and interpret the file's contents.

e_type: This member identifies the object file type.

Name          Value      Meaning
ET_NONE       0          No file type
ET_REL        1          Relocatable file
ET_EXEC       2          Executable file
ET_DYN        3          Shared object file
ET_CORE       4          Core file
ET_LOPROC     0xff00     Processor-specific
ET_HIPROC     0xffff     Processor-specific

e_machine: This member's value specifies the required architecture for an individual file.

e_version: This member identifies the object file version.

Name          Value      Meaning
EV_NONE       0          Invalid version
EV_CURRENT    1          Current version

e_entry: This member gives the virtual address to which the system first transfers control, thus starting the process.

e_phoff: This member holds the program header table's file offset in bytes. If the file has no program header table, this member holds zero.

e_shoff: This member holds the section header table's file offset in bytes. If the file has no section header table, this member holds zero.

e_flags: This member holds processor-specific flags associated with the file. Flag names take the form EF_machine_flag.

e_ehsize: This member holds the ELF header's size in bytes.

e_phentsize: This member holds the size in bytes of one entry in the file's program header table; all entries are the same size.

e_phnum: This member holds the number of entries in the program header table.

e_shentsize: This member holds a section header's size in bytes.
A section header is one entry in the section header table; all entries are the same size.

e_shnum: This member holds the number of entries in the section header table.

e_shstrndx: This member holds the section header table index of the entry associated with the section name string table.

The Freeze Free Algorithm

The algorithm is as follows:

1) The old host suspends the migrating process.
2) The old host marshalls the execution state and process control state and transmits it to the new host.
3) The new host initializes an empty process object and responds with an accept or reject.
4) Simultaneously, right after its initial transmission, the old host determines the current code, stack, and heap pages and transmits them.
5) When the new host receives the first code, stack, and (optionally) heap page, it resumes the migrating process.
6) Concurrently, the old host ships the remaining stack pages, followed by communication links and file descriptor information.
7) The old host then flushes dirty heap and file cache pages. (This can be done in parallel with transfers to the new host if there is a separate communication link.)
8) After all data is transferred off the old host, it sends a flush complete message to the new host.

A Generic Migration Algorithm

1. A migration request is issued to a remote node. After negotiation, migration is accepted.
2. The process is detached from its source node by suspending its execution, declaring it to be in a migrating state, and temporarily redirecting communication as described in the following step.
3. Communication is temporarily redirected by queuing up arriving messages directed to the migrated process, and delivering them to the process after migration. This step continues in parallel with steps 4, 5, and 6, as long as there are additional incoming messages.
Once the communication channels are enabled after migration (as a result of step 7), the migrated process is known to the external world.
4. The process state is extracted, including memory contents; processor state (register contents); communication state (e.g., opened files and message channels); and relevant kernel context. The communication state and kernel context are OS-dependent, and some of the local OS internal state is not transferable. The process state is typically retained on the source node until the end of migration, and in some systems it remains there even after migration completes. Processor dependencies, such as register and stack contents, have to be eliminated in the case of heterogeneous migration.
5. A destination process instance is created into which the transferred state will be imported. The destination instance is not activated until a sufficient amount of state has been transferred from the source process instance; after that, the destination instance is promoted into a regular process.
6. State is transferred and imported into the new instance on the remote node. Not all of the state needs to be transferred; some of it can be lazily brought over after migration is completed.
7. Some means of forwarding references to the migrated process must be maintained. This is required in order to communicate with the process or to control it. It can be achieved by registering the current location at the home node (e.g. in Sprite), by searching for the migrated process (e.g. in the V Kernel, at the communication protocol level), or by forwarding messages across all visited nodes (e.g. in Charlotte). This step also enables the migrated communication channels at the destination, and it ends step 3, as communication is permanently redirected.
8. The new instance is resumed when sufficient state has been transferred and imported. With this step, process migration completes.
Once all of the state has been transferred from the original instance, it may be deleted on the source node.

Advantages of Linux

Low cost: You don't need to spend time and money to obtain licenses, since Linux and much of its software come with the GNU General Public License. You can start to work immediately without worrying that your software may stop working because a free trial expires. Additionally, there are large repositories from which you can freely download high-quality software for almost any task you can think of.

Stability: Linux doesn't need to be rebooted periodically to maintain performance levels. It doesn't freeze up or slow down over time due to memory leaks and the like. Continuous up-times of hundreds of days (up to a year or more) are not uncommon.

Performance: Linux provides persistent high performance on workstations and on networks. It can handle unusually large numbers of users simultaneously, and can make old computers sufficiently responsive to be useful again.

Network friendliness: Linux was developed by a group of programmers over the Internet and therefore has strong support for network functionality; client and server systems can be easily set up on any computer running Linux. It can perform tasks such as network backups faster and more reliably than alternative systems.

Flexibility: Linux can be used for high-performance server applications, desktop applications, and embedded systems. You can save disk space by installing only the components needed for a particular use. You can restrict the use of specific computers by installing, for example, only selected office applications instead of the whole suite.

Compatibility: It runs all common Unix software packages and can process all common file formats.

Choice: The large number of Linux distributions gives you a choice. Each distribution is developed and supported by a different organization.
You can pick the one you like best; the core functionalities are the same, and most software runs on most distributions.

Fast and easy installation: Most Linux distributions come with user-friendly installation and setup programs. Popular distributions also come with tools that make installation of additional software very user friendly.

Full use of hard disk: Linux continues to work well even when the hard disk is almost full.

Multitasking: Linux is designed to do many things at the same time; e.g., a large printing job in the background won't slow down your other work.

Security: Linux is one of the most secure operating systems. "Walls" and flexible file access permission systems prevent access by unwanted visitors or viruses. Linux users have the option to select and safely download software, free of charge, from online repositories containing thousands of high-quality packages. No purchase transactions requiring credit card numbers or other sensitive personal information are necessary.

Open Source: If you develop software that requires knowledge or modification of the operating system code, Linux's source code is at your fingertips. Most Linux applications are Open Source as well.

Benefits of Process Migration

1) Dynamic load distribution, by migrating processes from overloaded nodes to less loaded ones.
2) Fault resilience, by migrating processes from nodes that may have experienced a partial failure.
3) Improved system administration, by migrating processes from nodes that are about to be shut down or otherwise made unavailable.

Applications
1) Distributed applications can be started on certain nodes and migrated, either at the application level or by using a system-wide migration facility, in response to things like load balancing.
2) Long-running applications, which can run for days or weeks on end, can suffer various interruptions.
3) Multiuser applications can benefit greatly from process migration: as users come and go, the load on individual nodes varies widely.

References

Black, D., Golub, D., Julin, D., Rashid, R., Draves, R., Dean, R., Forin, A., Barrera, J., Tokuda, H., Malan, G., and Bohman, D. (April 1992). Microkernel Operating System Architecture and Mach. Proceedings of the USENIX Workshop on Micro-Kernels and Other Kernel Architectures.

Goldberg, A. and Jefferson, D. (1987). Transparent Process Cloning: A Tool for Load Management of Distributed Systems. Proceedings of the 8th International Conference on Parallel Processing.

Zhou, S. and Ferrari, D. (September 1987). An Experimental Study of Load Balancing Performance. Proceedings of the 7th IEEE International Conference on Distributed Computing Systems.

Google.com
Linuxgenerals.com
Linuxpdf.com
Final jaypaper linux