Parallel Computing

Submitted by: Virendra Singh Yadav
Overview
 What is Parallel Computing
 Why Use Parallel Computing
 Concepts and Terminology
 Parallel Computer Memory Architectures
 Parallel Programming Models
 Examples
 Conclusion
What is Parallel Computing?

 Parallel computing is the simultaneous use of multiple
 compute resources to solve a computational problem.
Why Use Parallel Computing?
 Saves time

 Solves larger problems

 Reduces cost

 Provides concurrency
Concepts and Terminology:
           Types Of Parallelism

   Data Parallelism



   Task Parallelism
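The distinction is easiest to see in code. Below is a minimal sketch, assuming C with OpenMP (an API the slides do not name): the first loop is data parallelism (the same operation applied to different array elements by different threads), while the sections construct is task parallelism (different operations running at the same time).

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int sum = 0;

    /* Data parallelism: every thread applies the same operation
       (squaring) to a different subset of the array elements. */
    #pragma omp parallel for
    for (int i = 0; i < 8; i++)
        data[i] = data[i] * data[i];

    /* Task parallelism: different threads perform different
       operations at the same time. */
    #pragma omp parallel sections
    {
        #pragma omp section
        {   /* task 1: sum the results */
            for (int i = 0; i < 8; i++)
                sum += data[i];
        }
        #pragma omp section
        {   /* task 2: report the first result */
            printf("first element: %d\n", data[0]);
        }
    }

    printf("sum of squares: %d\n", sum);
    return 0;
}
```

Compile with `gcc -fopenmp`; the same two patterns appear in any threading API.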
Concepts and Terminology:
    Flynn’s Classical Taxonomy
 Classifies multi-processor architectures along two
 independent dimensions: instruction streams and data streams.

 SISD – Single Instruction, Single Data

 SIMD – Single Instruction, Multiple Data

 MISD – Multiple Instruction, Single Data

 MIMD – Multiple Instruction, Multiple Data
Flynn’s Classical Taxonomy:
            SISD
                Serial (non-parallel) computer.
                Only one instruction stream and one
                 data stream are acted on during any
                 one clock cycle.
Flynn’s Classical Taxonomy:
           SIMD
               All processing units
                execute the same
                instruction at any given
                clock cycle.
               Each processing unit
                operates on a different
                data element.
Flynn’s Classical Taxonomy:
               MISD
 Different instructions operate on a single data
 element.

 Example: Multiple cryptography algorithms
 attempting to crack a single coded message.
Flynn’s Classical Taxonomy:
           MIMD
               Can execute different
                instructions on
                different data
                elements.
               Most common type of
                parallel computer.
Parallel Computer Memory Architectures:
        Shared Memory Architecture
 All processors access
  all memory as a single
  global address space.
 Data sharing is fast.
 Lacks scalability: adding
  CPUs increases traffic
  between memory and
  processors.
Parallel Computer Memory Architectures:
            Distributed Memory
 Each processor has its
  own memory.
 Memory is scalable with
  the number of processors;
  there is no cache-coherency
  overhead.
 Programmer is
  responsible for many
  details of
  communication
  between processors.
Parallel Programming Models
  Shared Memory Model


  Message Passing Model


  Data Parallel Model
Parallel Programming Models:
Shared Memory Model
 In the shared-memory programming model, tasks
 share a common address space, which they read and
 write asynchronously.
 Locks may be used to control shared memory access.

 Program development can be simplified since there is
 no need to explicitly specify communication between
 tasks.
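A minimal sketch of this model, assuming POSIX threads with a mutex as the lock (the slides name no particular API): two threads read and write one counter through the common address space, and the lock keeps their updates from interleaving.

```c
#include <pthread.h>
#include <stdio.h>

/* Shared state: both threads access this counter through the
   common address space. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* lock controls shared access */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);  /* 200000: no updates lost */
    return 0;
}
```

Note that no explicit communication appears anywhere: each thread simply sees the other's writes.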
Parallel Programming Models:
   Message Passing Model
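In this model, tasks use their own local memory and exchange data by explicitly sending and receiving messages; a send in one task must be matched by a receive in another. A minimal sketch, assuming MPI as the message-passing library (one common realization, since the original slide shows only a diagram):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Data moves only through an explicit, cooperative
           send/receive pair. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Build and run with, e.g., `mpicc example.c && mpirun -np 2 ./a.out`.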
Parallel Programming Models:
     Data Parallel Model
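In this model, a set of tasks performs the same operation collectively on different partitions of a common data structure. A minimal sketch, again assuming OpenMP: the runtime divides the loop iterations, and hence the array, among the threads.

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

static double a[N];

int main(void) {
    /* Data parallel model: one operation, applied collectively;
       each thread works on its own partition of the array. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * i;

    printf("a[N-1] = %f\n", a[N - 1]);
    return 0;
}
```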
Designing Parallel Programs
 Partitioning
       •   Domain Decomposition
       •   Functional Decomposition

 Communication

 Synchronization
Partitioning:
            Domain Decomposition
 Each task handles a portion of the data set.
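For illustration, a hypothetical helper (not from the slides) that computes which contiguous slice of an n-element data set a given task owns under a block-style domain decomposition:

```c
#include <stdio.h>

/* Block domain decomposition: task `rank` of `ntasks` owns the
   half-open slice [lo, hi), with any remainder spread over the
   first few tasks. */
static void block_range(int n, int ntasks, int rank, int *lo, int *hi) {
    int base = n / ntasks, rem = n % ntasks;
    *lo = rank * base + (rank < rem ? rank : rem);
    *hi = *lo + base + (rank < rem ? 1 : 0);
}

int main(void) {
    int lo, hi;
    for (int rank = 0; rank < 4; rank++) {
        block_range(10, 4, rank, &lo, &hi);
        printf("task %d handles elements [%d, %d)\n", rank, lo, hi);
    }
    return 0;
}
```

Cyclic and block-cyclic decompositions are common alternatives when the work per element is uneven.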
Partitioning:
          Functional Decomposition
 Each task performs a different function of the overall work.
Designing Parallel Programs
              Communication

 Synchronous communications are often referred to as
 blocking communications since other work must wait
 until the communications have completed.

 Asynchronous communications allow tasks to transfer
 data independently from one another.
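A sketch of the contrast, assuming MPI: MPI_Recv blocks until the message arrives, while MPI_Isend returns immediately so the sender can do other work before completing the transfer with MPI_Wait.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, out = 7, in = 0;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Asynchronous (non-blocking) send: returns at once. */
        MPI_Isend(&out, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* ... independent work can overlap the transfer here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete before reusing out */
    } else if (rank == 1) {
        /* Synchronous (blocking) receive: waits for the message. */
        MPI_Recv(&in, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 got %d\n", in);
    }

    MPI_Finalize();
    return 0;
}
```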
Designing Parallel Programs
                 Synchronization
Types of Synchronization:

 Barrier

 Lock / semaphore

 Synchronous communication operations
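A small sketch of the first two kinds, assuming OpenMP (an assumed API choice): the critical section acts as a lock around a shared update, and the barrier holds every thread until all have arrived.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    int arrived = 0;

    #pragma omp parallel
    {
        /* Lock / semaphore style: one thread at a time updates
           the shared counter. */
        #pragma omp critical
        arrived++;

        /* Barrier: no thread continues until all threads reach
           this point. */
        #pragma omp barrier

        if (omp_get_thread_num() == 0)
            printf("all %d threads passed the barrier\n", arrived);
    }
    return 0;
}
```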
Example:
 As a simple example, if we are running code on a
 2-processor system (CPUs "a" and "b") in a parallel
 environment and we wish to do tasks "A" and "B", we
 can tell CPU "a" to do task "A" and CPU "b" to do
 task "B" simultaneously, thereby reducing the
 runtime of the execution.
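A sketch of exactly this scenario, assuming OpenMP sections, with two hypothetical functions task_A and task_B standing in for the real work:

```c
#include <stdio.h>
#include <omp.h>

/* Hypothetical stand-ins for the two independent tasks. */
static void task_A(void) { printf("task A on thread %d\n", omp_get_thread_num()); }
static void task_B(void) { printf("task B on thread %d\n", omp_get_thread_num()); }

int main(void) {
    /* Two threads (standing in for CPUs "a" and "b") each take
       one task and run them simultaneously. */
    #pragma omp parallel sections num_threads(2)
    {
        #pragma omp section
        task_A();
        #pragma omp section
        task_B();
    }
    return 0;
}
```

If each task alone takes time t, the ideal parallel runtime is t rather than 2t.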
Example:
Array Processing
 Serial Solution
    Perform a function on a 2D array.
    Single processor iterates through each element in the
     array
 Possible Parallel Solution
    Assign each processor a partition of the array.
    Each process iterates through its own partition.
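Both solutions side by side in a C/OpenMP sketch (the slides prescribe no language): the serial loop visits every element on one processor, while the parallel loop lets each thread iterate over its own block of rows.

```c
#include <stdio.h>
#include <omp.h>

#define ROWS 1000
#define COLS 1000

static double a[ROWS][COLS];

int main(void) {
    /* Serial solution: a single processor iterates through
       every element of the 2D array. */
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            a[i][j] = i + 0.001 * j;

    /* Possible parallel solution: rows are partitioned among
       the threads; each thread iterates over its own partition. */
    #pragma omp parallel for
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            a[i][j] *= 2.0;

    printf("a[0][1] = %f\n", a[0][1]);
    return 0;
}
```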
Conclusion
 Parallel computing can greatly reduce the time needed to solve large computational problems.
 There are many different approaches and models of
  parallel computing.
References
 https://computing.llnl.gov/tutorials/parallel_comp
 Introduction to Parallel Computing,
  www.llnl.gov/computing/tutorials/parallel_comp/#Whatis
 www.cs.berkeley.edu/~yelick/cs267-sp04/lectures/01/lect01-intro
 http://www-users.cs.umn.edu/~karypis/parbook/
Thank You
