CUDA
By: Areeb Ahmed Khan
Topics:
• What is CUDA?
• What is FLOPS?
• CUDA C.
• Processing flow of CUDA.
• What is GPGPU?
• An Overview of Parallel Computing.
• Serial Computing VS Parallel Computing.
• World’s Most Powerful GPU.
What is CUDA?
CUDA stands for “Compute Unified Device Architecture”.
It is a parallel computing platform developed by NVIDIA (a graphics card
manufacturer) in 2006 and implemented in their GPUs. CUDA gives developers
access to the instruction set and memory of the parallel computation elements in
GPUs.
CUDA-enabled GPUs can be used for general-purpose processing (GPGPU).
CUDA Platform:
The CUDA platform is accessible to programmers through several CUDA-accelerated
libraries and through extensions to standard programming languages:
• C extended to CUDA C.
• C++ extended to CUDA C++.
• Fortran extended to CUDA Fortran.
NVIDIA CUDA™ technology provides a C-language environment that enables
programmers and developers to write software that solves complex computational
problems in a fraction of the time by tapping into the many-core parallel processing
power of GPUs.
CUDA C:
The user writes C code, and the compiler divides the code into two portions.
One portion is delivered to the CPU (which is best suited for serial control tasks),
while the other portion, involving extensive floating-point calculations, is delivered
to the GPU(s), which executes it in parallel. Because C is a familiar programming
language, CUDA has a gentle learning curve and is becoming a favorite tool for
accelerating a wide range of applications.
FLOPS:
FLOPS, an acronym for “FLoating-point Operations Per Second”, is a unit used to measure
computer performance. It is useful in fields of scientific computing that make heavy use of
floating-point calculations; for such workloads it is a more accurate measure than the more
generic instructions per second.
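As a rough worked example (the figures here are purely illustrative and do not refer to any
specific product), theoretical peak FLOPS can be estimated as:
peak FLOPS ≈ number of cores × clock rate × FLOPs per core per cycle
A hypothetical GPU with 2,560 cores running at 1.5 GHz and performing 2 floating-point
operations per core per cycle would therefore peak at about 2,560 × 1.5 × 10⁹ × 2 ≈ 7.7 × 10¹²
FLOPS, or roughly 7.7 TFLOPS.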
CUDA C Sample Code:
#include <stdio.h>
#include <cuda.h>

// Kernel: runs on the GPU (the __global__ qualifier marks device code).
__global__ void kernel_function(int a, int b)
{
    printf("The value is %d\n", a * b);
}

int main(void)
{
    // Launch the kernel with 1 block of 1 thread.
    kernel_function<<<1, 1>>>(5, 2);
    // Wait for the GPU to finish so the kernel's printf output is flushed.
    cudaDeviceSynchronize();
    return 0;
}
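The sample above is typically saved in a file with a .cu extension and built with NVIDIA's
nvcc compiler, for example: nvcc sample.cu -o sample (the file name here is just an
illustration).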
Processing Flow of CUDA:
• The CPU copies data from main memory to GPU memory.
• The CPU instructs the GPU to launch the kernel.
• The GPU executes the kernel in parallel across its cores.
• The CPU copies the result from GPU memory back to main memory (a minimal code
sketch of this flow follows below).
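The following is a minimal sketch of these four steps, not taken from the original slides;
the kernel name double_elements, the array size N, and the sample values are placeholders
chosen only for illustration. Each GPU thread doubles one element of the array.

#include <stdio.h>
#include <cuda_runtime.h>

#define N 8

// Each GPU thread doubles one element in parallel (step 3 of the flow).
__global__ void double_elements(int *data)
{
    int i = threadIdx.x;
    if (i < N)
        data[i] *= 2;
}

int main(void)
{
    int host_data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    int *device_data;
    cudaMalloc(&device_data, N * sizeof(int));

    // Step 1: the CPU copies data from main memory to GPU memory.
    cudaMemcpy(device_data, host_data, N * sizeof(int), cudaMemcpyHostToDevice);

    // Step 2: the CPU instructs the GPU to launch the kernel (1 block of N threads).
    double_elements<<<1, N>>>(device_data);

    // Step 3: the GPU executes the kernel in parallel; wait for it to finish.
    cudaDeviceSynchronize();

    // Step 4: the CPU copies the result from GPU memory back to main memory.
    cudaMemcpy(host_data, device_data, N * sizeof(int), cudaMemcpyDeviceToHost);

    for (int i = 0; i < N; i++)
        printf("%d ", host_data[i]);
    printf("\n");

    cudaFree(device_data);
    return 0;
}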
What is GPGPU?
GPGPU stands for “General-Purpose computing on Graphics Processing Units”. It is an
approach in which a GPU performs computations that are traditionally handled by the
Central Processing Unit (CPU). A CPU has a small number of cores working in parallel,
whereas a GPU has hundreds of cores working in parallel, which can be put to very good
use for general computations. This approach is also called GPU-accelerated computing.
Parallel Computing:
It is a form of computation in which many calculations are carried out
simultaneously, so that large problems are completed more quickly.
GPUs are built around parallel computing, whereas in CPUs parallel computing takes the
form of multiple cores that can service many processes simultaneously for a more
time-efficient solution.
Serial Computing VS Parallel Computing
Serial Computing:
• The program runs on a single computer having a single Central Processing Unit (CPU).
• A problem is broken into a discrete series of instructions.
• Instructions are executed one after another.
• Only one instruction may execute at any moment in time.
Parallel Computing:
• The problem is broken into discrete parts that can be solved concurrently.
• Each part is further broken down into a series of instructions.
• Instructions from each part execute simultaneously on different processors.
• An overall control/coordination mechanism is employed (the contrast is sketched in
code below).
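As a rough illustration of this contrast (not part of the original slides; the function names
add_serial and add_parallel, the size N, and the launch configuration are placeholders), the
sketch below adds two arrays first with a serial CPU loop and then with a CUDA kernel in
which each thread handles one element:

#include <stdio.h>
#include <cuda_runtime.h>

#define N 1024

// Serial computing: one CPU core executes the instructions one after another.
void add_serial(const float *a, const float *b, float *c)
{
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];
}

// Parallel computing: the problem is broken into blocks of threads, and each
// thread computes one element; the instructions run simultaneously on
// different GPU cores.
__global__ void add_parallel(const float *a, const float *b, float *c)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // which part is mine?
    if (i < N)
        c[i] = a[i] + b[i];
}

int main(void)
{
    float a[N], b[N], c_serial[N], c_parallel[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

    add_serial(a, b, c_serial);

    // Device buffers (error checking omitted for brevity).
    float *da, *db, *dc;
    cudaMalloc(&da, N * sizeof(float));
    cudaMalloc(&db, N * sizeof(float));
    cudaMalloc(&dc, N * sizeof(float));
    cudaMemcpy(da, a, N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, N * sizeof(float), cudaMemcpyHostToDevice);

    // 4 blocks of 256 threads cover all N = 1024 elements.
    add_parallel<<<4, 256>>>(da, db, dc);
    cudaMemcpy(c_parallel, dc, N * sizeof(float), cudaMemcpyDeviceToHost);

    printf("serial: %f  parallel: %f\n", c_serial[N - 1], c_parallel[N - 1]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}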
World’s Most Powerful GPU
THANK YOU !