24. pcwebopedia.com : Unlike conventional networks that focus on communication among devices, grid computing harnesses unused processing cycles of all computers in a network for solving problems too intensive for any stand-alone machine.
25. IBM: Grid computing enables the virtualization of distributed computing and data resources such as processing, network bandwidth and storage capacity to create a single system image, granting users and applications seamless access to vast IT capabilities. Just as an Internet user views a unified instance of content via the Web, a grid user essentially sees a single, large virtual computer.
26. Sun: Grid Computing is a computing infrastructure that provides dependable, consistent, pervasive and inexpensive access to computational capabilities.
42. This allows the computer's control circuitry to issue instructions at the processing rate of the slowest pipeline stage, which is much faster than waiting for all stages to complete for one instruction before starting the next.
43. A non-pipelined architecture is inefficient because some CPU components (modules) sit idle while another module is active during the instruction cycle.
44. Pipelined processors are internally organized into stages that can work semi-independently on separate instructions.
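The throughput argument above can be sketched numerically. The stage count and one-cycle-per-stage clock below are illustrative assumptions, not figures from the slides.

```python
# Sketch: non-pipelined vs. pipelined instruction timing for a
# hypothetical 4-stage datapath (fetch, decode, execute, write-back).

STAGES = 4   # assumed number of pipeline stages
CYCLE = 1    # one clock per stage; the clock must match the slowest stage

def non_pipelined_cycles(n_instructions):
    # Each instruction occupies the whole datapath before the next starts.
    return n_instructions * STAGES * CYCLE

def pipelined_cycles(n_instructions):
    # The first instruction fills the pipeline (STAGES cycles); after that,
    # one instruction completes every cycle.
    return (STAGES + (n_instructions - 1)) * CYCLE

for n in (1, 10, 100):
    print(n, non_pipelined_cycles(n), pipelined_cycles(n))
# For 100 instructions: 400 cycles non-pipelined vs. 103 pipelined.
```

As the instruction count grows, the pipelined machine approaches one completed instruction per cycle, which is the point of issuing at the rate of the slowest stage.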
47. Typical computing elements: hardware, operating system, applications, programming paradigms. [Figure: a microkernel-based multi-processor computing system with processors P, and the process/thread interface exposing threads to applications]
63. SIMD Architecture. Examples: CRAY machine (vector processing), Intel MMX (multimedia support). Operation: C_i <= A_i * B_i. [Figure: a single instruction stream driving processors A, B and C, each with its own data input stream and data output stream]
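The operation C_i <= A_i * B_i can be illustrated in a few lines. This is a pure-Python sketch of the SIMD idea, not real vector hardware: one instruction (multiply), many data lanes.

```python
# Sketch: SIMD-style elementwise multiply, C_i <= A_i * B_i.
# The "single instruction" is the multiplication; each index i is a
# separate data lane processed with the same operation.

def simd_mul(a, b):
    return [x * y for x, y in zip(a, b)]

A = [1, 2, 3, 4]
B = [10, 20, 30, 40]
C = simd_mul(A, B)
print(C)  # → [10, 40, 90, 160]
```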
64. MIMD Architecture. Unlike SISD and MISD machines, a MIMD computer works asynchronously. Two classes: shared-memory (tightly coupled) MIMD and distributed-memory (loosely coupled) MIMD. [Figure: processors A, B and C, each with its own instruction stream, data input stream and data output stream]
70. Highly reliable: the failure of any one CPU does not affect the whole system. [Figure: processors A, B and C connected by IPC channels, each with its own memory bus and memory system]
84. OS = Microkernel + User Subsystems. Examples: Mach, PARAS, Chorus, etc. [Figure: in user space, a client application with a thread library alongside file, network and display servers; the microkernel sits between them and the hardware, with communication via send/reply messages]
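The send/reply structure can be sketched as message passing between a client and a user-level server. This is a simplified illustration (queues standing in for kernel-mediated IPC), not the actual Mach, PARAS or Chorus API.

```python
# Sketch: microkernel-style send/reply. Services such as the file server
# run as user-level processes; a client reaches them only through
# kernel-mediated messages (modelled here by two queues).

import queue
import threading

request_q = queue.Queue()   # client -> server ("send")
reply_q = queue.Queue()     # server -> client ("reply")

def file_server():
    # User-level server: receive one request message, send one reply.
    op, path = request_q.get()
    reply_q.put(f"{op} {path}: ok")

server = threading.Thread(target=file_server)
server.start()
request_q.put(("read", "/etc/motd"))   # Send
reply = reply_q.get()                  # Reply
server.join()
print(reply)  # → read /etc/motd: ok
```

The design point of the figure is that the file server is an ordinary user process: replacing or restarting it does not require touching the kernel.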
174. In each superstep a processor can work on local data and send messages.
175. At the end of each superstep a barrier synchronization takes place, after which every processor receives the messages that were sent to it during that superstep.
177. Book: Rob H. Bisseling, “Parallel Scientific Computation: A Structured Approach using BSP and MPI,” Oxford University Press, 2004, 324 pages, ISBN 0-19-852939-2.
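The superstep discipline can be sketched with threads standing in for processors. This is an illustrative assumption (threads and Python lists instead of real BSP processes and communication), showing only the rule that messages sent in one superstep become visible after the barrier.

```python
# Sketch: two BSP supersteps. Each "processor" (thread) computes locally,
# sends a message, and only sees incoming messages after the barrier.

import threading

P = 3
barrier = threading.Barrier(P)
outbox = [[] for _ in range(P)]  # messages in flight during a superstep
inbox = [[] for _ in range(P)]   # messages delivered after the barrier
results = [None] * P

def worker(pid):
    # Superstep 1: local computation, then send pid^2 to the next processor.
    outbox[(pid + 1) % P].append(pid * pid)
    barrier.wait()                            # barrier synchronization
    # Delivery between supersteps: each pid collects its own messages.
    inbox[pid], outbox[pid] = outbox[pid], []
    barrier.wait()
    # Superstep 2: messages from the previous superstep are now available.
    results[pid] = inbox[pid][0]

threads = [threading.Thread(target=worker, args=(i,)) for i in range(P)]
for t in threads: t.start()
for t in threads: t.join()
print(results)  # → [4, 0, 1]  (each pid holds its predecessor's square)
```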
270. Memory ushering: migrate processes away from a node that has nearly exhausted its free memory, to prevent paging.
271. Parallel file I/O: bring the process to the file server, so that migrated processes perform file I/O directly.
273. Example: disk access from diskless nodes to the file server is completely transparent to programs.
276. The user context (the "remote") can be migrated, even to a diskless node;
278. Deputy and remote are connected by an exclusive link, used for both synchronous interaction (system calls) and asynchronous interaction (signals, MOSIX events).
279. The process context (code, stack, data) is site-independent and may migrate. [Figure: the deputy remains in the kernel of the local master node; the remote runs in userland on a diskless node; the two communicate over the openMOSIX link]
281. Responds to variations in the load of the nodes, the runtime characteristics of the processes, and the number of nodes and their speeds.
338. Client programs use the classes in the client stub files to send messages to the servant objects; servant objects inherit from classes in the server skeleton files to receive messages from the client programs. [Figure: client program linked by association to the client stub; servant object linked by inheritance to the server skeleton]
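The association-vs-inheritance distinction above can be sketched in a language-neutral way. These are plain illustrative classes, not IDL-generated CORBA code, and the transport is collapsed to a direct function call.

```python
# Sketch: the stub/skeleton pattern. The client holds a stub by
# *association*; the servant gets its dispatch machinery by *inheritance*
# from the skeleton.

class GreeterStub:
    # Client side: turns a method call into a message for the server.
    def __init__(self, transport):
        self.transport = transport          # association with the "network"
    def greet(self, name):
        return self.transport(("greet", name))

class GreeterSkeleton:
    # Server side: receives a message and dispatches it to the servant.
    def dispatch(self, message):
        op, arg = message
        return getattr(self, op)(arg)

class GreeterServant(GreeterSkeleton):      # servant inherits from skeleton
    def greet(self, name):
        return f"Hello, {name}"

servant = GreeterServant()
stub = GreeterStub(servant.dispatch)        # transport collapsed to a call
print(stub.greet("grid"))                   # → Hello, grid
```

In a real ORB the stub would marshal the call onto the wire and the skeleton would unmarshal it on the server, but the association/inheritance roles are the same.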
372. Gridware can be viewed as a special type of middleware that enables sharing and managing grid components based on user requirements and resource attributes (e.g., capacity, performance, availability).
376. Many others: Cluster Computing, Network Computing, Client/Server Computing, Internet Computing, etc...
378. In general, the answer is “no.” Distributed Computing is most often concerned with distributing the load of a program across two or more processes.
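Distributing a program's load across two or more processes can be sketched concretely. The work item (squaring numbers) and the use of worker subprocesses are illustrative assumptions; real distributed systems ship far larger tasks across machines.

```python
# Sketch: the load of one program spread over several OS processes.
# Each task runs in its own Python subprocess; a small thread pool
# keeps two of them busy at a time.

from concurrent.futures import ThreadPoolExecutor
import subprocess
import sys

def square_in_process(n):
    # The actual computation happens in a separate process.
    out = subprocess.run([sys.executable, "-c", f"print({n} * {n})"],
                         capture_output=True, text=True)
    return int(out.stdout)

with ThreadPoolExecutor(max_workers=2) as ex:
    results = list(ex.map(square_in_process, range(6)))
print(results)  # → [0, 1, 4, 9, 16, 25]
```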
380. Computers can act as clients or servers depending on what role is most efficient for the network.
402. Focuses on execution environments for integrating widely distributed computational platforms, data resources, displays, special instruments, and so forth.