High–Performance Computing
(HPC)
Prepared By:
Abdussamad Muntahi
Muhammad Rahman
1
Introduction to High–Performance Computing
(HPC)
2
Introduction
• High-speed computing. Originally pertaining only to
supercomputers for scientific research
• Tools and systems used to implement and create high
performance computing systems
• Used for scientific research or computational science
• The discipline's main focus is developing parallel
processing algorithms and software, so that a program
can be divided into small parts and executed
simultaneously by separate processors
• HPC systems have shifted from supercomputing to
computing clusters
3
What is a Cluster?
• A cluster is a group of machines interconnected so that they
work together as a single system
• Used for better speed and capacity
• Types of Cluster
o High-availability (HA) clusters
o Load-balancing clusters
o Grid computing
• Terminology
o Node – individual machine in a cluster
o Head node – connected to both the cluster's private network
and a public network; used to access the cluster.
Responsible for providing users an environment to work in and
for distributing tasks among the other nodes
o Compute nodes – connected only to the private network of the
cluster and generally used for running jobs assigned to them by
the head node(s)
4
Benefits of Cluster
• Reduced Cost
o The price of off-the-shelf consumer desktops has plummeted in recent
years, and this drop in price has corresponded with a vast increase in
their processing power and performance. The average desktop PC
today is many times more powerful than the first mainframe
computers.
• Processing Power
o The parallel processing power of a high-performance cluster can, in
many cases, prove more cost effective than a mainframe with similar
power. This reduced price-per-unit of power enables enterprises to get
a greater ROI (Return On Investment) from their IT budget.
• Scalability
o Perhaps the greatest advantage of computer clusters is the scalability
they offer. While mainframe computers have a fixed processing
capacity, computer clusters can be easily expanded as requirements
change by adding additional nodes to the network.
5
Benefits of Cluster
• Improved Network Technology
o Driving the development of computer clusters has been a vast
improvement in the technology related to networking, along
with a reduction in the price of such technology.
o In clusters, computers are typically connected via a single virtual
local area network (VLAN), and the network treats each
computer as a separate node. Information can be passed
throughout these networks with very little lag, ensuring that
data doesn’t bottleneck between nodes.
• Availability
o When a mainframe computer fails, the entire system fails.
However, if a node in a computer cluster fails, its operations can
be simply transferred to another node within the cluster,
ensuring that there is no interruption in service.
6
Application of HPC
• Used to solve complex modeling problems in a spectrum of
disciplines
• Topics include:
o Artificial intelligence
o Climate modeling
o Cryptographic analysis
o Geophysics
o Molecular biology
o Molecular dynamics
o Nuclear physics
o Physical oceanography
o Plasma physics
o Quantum physics
o Quantum chemistry
o Solid state physics
o Structural dynamics
• HPC is currently applied to business uses as well
o data warehouses
o line-of-business (LOB) applications
o transaction processing
7
Top 10 Supercomputers for HPC
June 2011
8
Source: TOP500.org
Typical Cluster
9
Fastest Supercomputer in USA:
Jaguar @ Oak Ridge National Lab
10
http://computing.ornl.gov
Hardware and Our Cluster
SIHPCI (maxwell)
11
Cluster Architecture: Typical
12
Cluster Architecture: Maxwell
13
The Cluster: maxwell
Tech Specs:
 No. of nodes: 106
 Each node: two Intel quad-core
2.3 GHz CPUs
 Total no. of cores: 848
 RAM per node: 8 GB
 Storage: 90 TB
14
Hardware: Master/Head Node
• Head node is responsible for providing user an environment
to work and distributing task among other nodes
• Minimum Specification
o CPU of i586 or above
o A network interface card that supports a TCP/IP stack
o At least 4GB total free space – 2GB under / and 2GB under /var
o A Floppy Drive
o A CD-Rom Drive
15
Front End
Back End
Hardware: Master/Head Node
• Maxwell Specification
o Server format Rack
o CPU family Intel Xeon
o CPU nominal frequency 2.26GHz
o Processor Model Xeon E5520
o Processors supplied 2 Quad core
o Memory RAM capacity 24GB Memory (6x4GB),
o Memory type DDR3
o Memory frequency 1066MHz Quad Ranked RDIMMs
o Storage HDD 146GB 15K RPM Serial-Attach SCSI
o RAID module PERC 6/i SAS RAID Controller 2x4 Connectors
o Gigabit LAN ports 2
o Power supply rating 480W
o Idle power consumption 150W
o Peak power consumption 270W
o OS Red Hat Enterprise Linux 5.3 AP, x32 and x64
16
Front End
Hardware: Computing Node (Client)
• Dedicated for Computation
• Minimum Specification
o CPU of i586 or above
o A disk on each client node, at least 2GB in size
o A network interface card that supports a TCP/IP stack
o All clients must have the same architecture (e.g., ia32 vs. ia64)
o Monitors and keyboards may be helpful, but are not required
o Floppy drive or PXE-enabled BIOS
o A CD-Rom Drive
17
Two Quad Core
processors
Memory slot
Memory slot
Hardware: Computing Node (Client)
• Maxwell Specification
o CPU family Intel Xeon
o CPU nominal frequency 2.13GHz
o Processors supplied 2 quad core
o Memory RAM capacity 8GB Memory (4x2GB)
o Memory type DDR3
o Memory frequency 1333MHz Dual Ranked UDIMMs
o Storage HDD 160GB 7.2K RPM SATA
o Gigabit LAN ports 2
o Power supply rating 480W
o Idle power consumption 115W
o Peak power consumption 188W
o OS Red Hat Linux 5 HPC
18
Front End
Hardware: Switch
• Minimum Specification
– The switch is necessary for communication between the nodes
– Each node (including the head node) should have its own port on the
switch. In other words, if there are one head node and 8 client nodes,
you need at a minimum a 9-port switch
19
Switch
Hardware: Switch
• Maxwell Specification
o Model: Power Connect 6248
o Ports: 48 10/100/1000BASE-T auto-sensing Gigabit Ethernet switching ports
o 48-port GbE (Gigabit Ethernet) managed switch with two 10GbE ports, stacking
capable
20
Power Connect 6248 Switch Stack
Hardware: Power Distribution Unit
• APC Switched Rack Power Distribution Units (PDUs) place rack equipment
power control in the hands of the IT Manager. Remote outlet level controls
allow power on/off functionality for power recycling to reboot locked-up
equipment and to avoid unauthorized use of individual outlets. Power
sequencing delays allow users to define the order in which to power up or
down attached equipment. Avoid circuit overload during power recovery
and extend uptime of critical equipment by prioritizing load shedding.
o PDU plug type L6-30P
o PDU Model APC AP7541
o PDU Max Amperage Load 30 A
21
Hardware: External Storage Array
• Minimum Specification:
o Model PowerVault MD1000 Hard Drive
o Max Supported Capacity 1.1 TB
o Host Channels 2
o Data Transfer Rate 300 MB/s
o Supported Devices Hard drive, Disk array (RAID)
o Spindle Speed 15000 RPM
• Maxwell Specification
o Total storage arrays 6
o Each storage array holds 15 HDDs
o Each HDD is 1 TB
o Total storage capacity 6 × 15 × 1 TB = 90 TB
22
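As a quick sanity check, the storage figures multiply out to the 90 TB quoted in the cluster's tech specs (assuming 1 TB usable per drive):

```python
# Cross-check Maxwell's storage capacity (assumes 1 TB usable per HDD).
arrays = 6
hdds_per_array = 15
tb_per_hdd = 1

total_tb = arrays * hdds_per_array * tb_per_hdd
print(total_tb)  # 90, matching the "Storage Memory: 90 TB" spec
```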
Front Side
Back Side
Hardware: KVM Switch
• KVM (Keyboard Video Mouse) Switch is a device used to connect a
keyboard, mouse and monitor to two or more computers. KVM switches
save money, time, space, equipment and power. These switches are also
widely deployed to control pools of servers in data centers. Some KVM
switches support user terminals at both ends that allow local and remote
access to all the computers or servers.
23
Hardware: Networking
• Clusters are interconnected with both GigE and InfiniBand switches and cards:
o Dell PowerConnect 6248, 48-port GbE managed switch
o 2 × Dell PowerConnect 3424, 24-port FE with 2 GbE copper ports and 2 GbE
fiber SFP ports
o Dell 24-port internally managed 9024 DDR InfiniBand edge switch
24
Switch Ports Cable connectivity
Maxwell (Front)
25
Maxwell (Back)
26
Software for High–Performance Computing
(HPC)
27
Software for HPC
• For effective use of cluster for HPC the following
tools are at our disposal
– Remote hardware management
 Remote power on/off
 Monitoring CPU (for temperature etc.)
– Cluster management
 Monitoring programs, system administration etc.
– Job scheduling
– Libraries/languages for parallel programming
 Message Passing Interface (MPI)
28
Cluster Management
• Cluster management software offers
– An easy-to-use interface for managing clusters
– Automated job queuing
– Matching of a job's requirements to the
resources available in the cluster
– Migration of jobs across the cluster
• Maxwell uses Red Hat Enterprise Linux
29
Cluster Management
• Red Hat Enterprise Linux
– Well suited to scientific computing, deploying
clusters of systems that work together
– Excellent hardware detection and monitoring
capabilities
– Centralized authentication and logging services
– Fast IO (Input/Output)
30
Parallel Computing
• Form of computation in which many calculations are carried out
simultaneously, operating on the principle that large problems can
often be divided into smaller ones, which are then solved
concurrently i.e. "in parallel"
• Different forms of parallel computing
– Bit-level parallelism
– Instruction level parallelism
– Data parallelism
– Task parallelism
• Parallel Computer classification
– Multiple processing elements (multi-core and multi-processor) within
a single machine
– Using multiple computers to work on the same task - clusters, MPPs
(Massive Parallel Processing), and grids
31
Parallel Programming
• Parallel computer programs are more difficult to write than
sequential programs
• Potential problems
– Race condition (output depending on sequence or timing of
other events)
– Communication and synchronization between the different
subtasks
• HPC Parallel Programming Models associated with different
computing technology
– Single Instruction Multiple Data (SIMD) on Single Processors
– Multi-Process and Multi-Threading on SMP (symmetric
multiprocessing) Computers
– Message Passing Interface (MPI) on Clusters
32
Parallel Programming
• Message Passing Interface (MPI)
– An application programming interface (API) specification
that allows processes to communicate with one another by
sending and receiving messages
– Now a de facto standard for parallel programs running on
distributed memory systems in computer clusters and
supercomputers
– A message-passing API with language-independent
protocol and semantic specifications
– Supports both point-to-point and collective communication
– Goals are high performance, scalability, and portability
– Consists of a specific set of routines (i.e. APIs) directly
callable from C, C++, Fortran and any language able to
interface with such libraries, including C#, Java or Python
33
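MPI code itself is not shown on these slides; as an illustration only, the point-to-point send/receive pattern can be mimicked with Python's multiprocessing pipes (this is not MPI: a real program would call MPI_Send/MPI_Recv from C or Fortran, or use mpi4py from Python):

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # "Rank 1": block until a message arrives, then send a reply back.
    data = conn.recv()       # analogous to MPI_Recv
    conn.send(sum(data))     # analogous to MPI_Send
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send([1, 2, 3, 4])   # "Rank 0" sends a message
    print(parent_end.recv())        # prints 10, the worker's reply
    p.join()
```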
Tutorial on Maxwell
34
Maxwell: A Brief Introduction
35
Tech Specs:
 No. of nodes: 106
 Each node: two Intel quad-core
2.3 GHz CPUs
 Total no. of cores: 848
 RAM per node: 8 GB
 Storage: 90 TB
How to create an account?
• Send an email to
– Nancy Beasley nancyj0@siu.edu or
– Dr. Shaikh Ahmed ahmed@siu.edu
• Provide the following information
– Name
– Affiliation
– IP address of the computer(s) on SIUC network from which you
would access Maxwell
• Will receive an email with Log In information
36
How to create an account?
• IP address look-up
– Go to Start and type in 'cmd' to open the command prompt
– Type in 'ipconfig' to get the IP address
37
Login Procedure
• Download ‘Putty’
– Web addresses
• http://www.putty.org/
• http://download.cnet.com/PuTTY/3000-7240_4-
10808581.html
– Run ‘Putty’
• Use Host Name or IP address of Maxwell
– Host Name: maxwell.ecehpc.siuc.edu
• Enable X11
42
Login Procedure
• Run 'PuTTY'
– Start 'Session': type in Host Name "maxwell.ecehpc.siuc.edu"
– Enable X11: Connection > SSH > X11, check 'Enable X11 Forwarding'
– Press Open to start the session
43
Login Procedure
– Type in your Login ID (example: 'nanoSIUC'), then your password
– You are prompted with your Login ID: this is the interface for using Maxwell
46
MPI Test
• Copy “mpi_test” directory to your “home” directory
– Type in the following command
• cp -r /home/nanoSIUC/mpi_test .
49
Run MPI
50
Type in ‘ls’ to view
the content of your
account
Run MPI
51
Folder containing
files to test MPI
Run MPI
52
Change Directory
and view the files in
‘mpi_test’
Open to see the
content of these files
Run MPI
• File: ‘bsub.sh’
53
job_name
output_file
run_limit
memory_limit
processor_use
run_time
to run a
parallel
program
by MPI
Change this
(nanoSIUC) to
your login ID
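For reference, a hypothetical bsub.sh along the lines described above could use LSF-style #BSUB directives; the values below are illustrative placeholders, not Maxwell's actual defaults:

```shell
#!/bin/bash
#BSUB -J cpi_test        # job_name
#BSUB -o output.%J       # output_file (%J expands to the job ID)
#BSUB -W 00:30           # run_limit (wall-clock, hh:mm)
#BSUB -M 1024            # memory_limit
#BSUB -n 8               # processor_use (number of processors)

mpirun ./cpi             # run a parallel program by MPI
```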
Run MPI
54
• File: ‘submit_script.sh’
Creates a new directory to generate
output.
Directory name:
cpi_test_<number of nodes used>
Run MPI
• Script to Run MPI
– ./submit_script.sh bsub.sh <# of Processors> <input file>
– <# of Processors> is an integer
– <input file> is optional
• If it is in a different directory, include the path to the input file as well
55
Run MPI
• Script to Run MPI
56
Running MPI Program
<# of processors>
Output Directory
Run MPI
57
• Viewing Output
Output File
Content of Output
Directory
Run MPI
58
• Viewing Output
Content of Output
File
The End
59
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 
Student login on Anyboli platform.helpin
Student login on Anyboli platform.helpinStudent login on Anyboli platform.helpin
Student login on Anyboli platform.helpin
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
Advance Mobile Application Development class 07
Advance Mobile Application Development class 07Advance Mobile Application Development class 07
Advance Mobile Application Development class 07
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 

Maxwell siuc hpc_description_tutorial

Benefits of Cluster
• Scalability
o Perhaps the greatest advantage of computer clusters is the scalability they offer. While mainframe computers have a fixed processing capacity, computer clusters can be easily expanded as requirements change by adding additional nodes to the network.
5
Benefits of Cluster
• Improved Network Technology
o Driving the development of computer clusters has been a vast improvement in the technology related to networking, along with a reduction in the price of such technology.
o In clusters, computers are typically connected via a single virtual local area network (VLAN), and the network treats each computer as a separate node. Information can be passed throughout these networks with very little lag, ensuring that data doesn't bottleneck between nodes.
• Availability
o When a mainframe computer fails, the entire system fails. However, if a node in a computer cluster fails, its operations can simply be transferred to another node within the cluster, ensuring that there is no interruption in service.
6
Application of HPC
• Used to solve complex modeling problems in a spectrum of disciplines
• Topics include:
o Artificial intelligence
o Climate modeling
o Cryptographic analysis
o Geophysics
o Molecular biology
o Molecular dynamics
o Nuclear physics
o Physical oceanography
o Plasma physics
o Quantum physics
o Quantum chemistry
o Solid state physics
o Structural dynamics
• HPC is currently applied to business uses as well
o Data warehouses
o Line-of-business (LOB) applications
o Transaction processing
7
Top 10 Supercomputers for HPC, June 2011
(Table from TOP500.Org; copyright © 2000–2009 TOP500.Org, all trademarks and copyrights owned by their respective owners)
8
Fastest Supercomputer in USA: Jaguar @ Oak Ridge National Lab
http://computing.ornl.gov
10
Hardware and Our Cluster
SIHPCI (maxwell)
11
The Cluster: maxwell
Tech Specs:
 No. of nodes: 106
 Each node has two Intel quad-core 2.3 GHz CPUs
 Total no. of cores: 848
 RAM per node: 8 GB
 Storage: 90 TB
14
Hardware: Master/Head Node
• The head node is responsible for providing users an environment to work in and for distributing tasks among the other nodes
• Minimum Specification
o CPU of i586 or above
o A network interface card that supports a TCP/IP stack
o At least 4GB total free space – 2GB under and 2GB under /var
o A floppy drive
o A CD-ROM drive
(Figure: front end and back end views)
15
Hardware: Master/Head Node
• Maxwell Specification
o Server format: rack
o CPU family: Intel Xeon
o CPU nominal frequency: 2.26GHz
o Processor model: Xeon E5520
o Processors supplied: 2 quad core
o Memory RAM capacity: 24GB (6x4GB)
o Memory type: DDR3
o Memory frequency: 1066MHz Quad Ranked RDIMMs
o Storage HDD: 146GB 15K RPM Serial-Attach SCSI
o RAID module: PERC 6/i SAS RAID Controller, 2x4 connectors
o Gigabit LAN ports: 2
o Power supply rating: 480W
o Idle power consumption: 150W
o Peak power consumption: 270W
o OS: Red Hat Enterprise Linux 5.3 AP x32 and x64
(Figure: front end view)
16
Hardware: Computing Node (Client)
• Dedicated for computation
• Minimum Specification
o CPU of i586 or above
o A disk on each client node, at least 2GB in size
o A network interface card that supports a TCP/IP stack
o All clients must have the same architecture (e.g., ia32 vs. ia64)
o Monitors and keyboards may be helpful, but are not required
o Floppy or PXE-enabled BIOS
o A CD-ROM drive
(Figure: two quad-core processors and memory slots)
17
Hardware: Computing Node (Client)
• Maxwell Specification
o CPU family: Intel Xeon
o CPU nominal frequency: 2.13GHz
o Processors supplied: 2 quad core
o Memory RAM capacity: 8GB (4x2GB)
o Memory type: DDR3
o Memory frequency: 1333MHz Dual Ranked UDIMMs
o Storage HDD: 160GB 7.2K RPM SATA
o Gigabit LAN ports: 2
o Power supply rating: 480W
o Idle power consumption: 115W
o Peak power consumption: 188W
o OS: Red Hat Linux 5 HPC
(Figure: front end view)
18
Hardware: Switch
• Minimum Specification
– The switch is necessary for communication between the nodes
– Each node (including the head node) should have its own port on the switch. In other words, if there are one head node and 8 client nodes, you need at a minimum a 9-port switch
(Figure: switch)
19
Hardware: Switch
• Maxwell Specification
o Model: PowerConnect 6248
o Ports: 48 10/100/1000BASE-T auto-sensing Gigabit Ethernet switching ports
o 48-port GbE (Gigabit Ethernet) managed switch, with two 10GbE ports and stacking capability
(Figure: PowerConnect 6248 switch stack)
20
Hardware: Power Distribution Unit
• APC Switched Rack Power Distribution Units (PDUs) place rack equipment power control in the hands of the IT manager. Remote outlet-level controls allow power on/off functionality for power recycling to reboot locked-up equipment and to avoid unauthorized use of individual outlets. Power sequencing delays allow users to define the order in which to power up or down attached equipment, avoiding circuit overload during power recovery and extending uptime of critical equipment by prioritizing load shedding.
o PDU plug type: L6-30P
o PDU model: APC AP7541
o PDU max amperage load: 30
21
Hardware: External Storage Array
• Minimum Specification
o Model: PowerVault MD1000 hard drive array
o Max supported capacity: 1.1 TB
o Host channels: 2
o Data transfer rate: 300 MB/s
o Supported devices: hard drive, disk array (RAID)
o Spindle speed: 15000 RPM
• Maxwell Specification
o Total storage arrays: 6
o In each storage array: 15 HDDs
o Each HDD: 1.1 TB
o Total storage capacity: 6*15*1.1 TB
(Figure: front side and back side views)
22
Hardware: KVM Switch
• A KVM (Keyboard Video Mouse) switch is a device used to connect a keyboard, mouse and monitor to two or more computers. KVM switches save money, time, space, equipment and power. These switches are also widely deployed to control pools of servers in data centers. Some KVM switches support user terminals at both ends that allow local and remote access to all the computers or servers.
23
Hardware: Networking
• Clusters are interconnected with both GigE and InfiniBand switches and cards
o GigE: Dell PowerConnect 6248 48-GbE-port managed switch; 2x Dell PowerConnect 3424 24-port FE with 2 GbE copper ports and 2 GbE fiber SFP ports
o InfiniBand: Dell 24-port internally managed 9024 DDR InfiniBand edge switch
(Figure: switch ports and cable connectivity)
24
Software for High–Performance Computing (HPC)
27
Software for HPC
• For effective use of a cluster for HPC, the following tools are at our disposal
– Remote hardware management
 Remote power on/off
 Monitoring CPU (for temperature etc.)
– Cluster management
 Monitoring programs, system administration etc.
– Job scheduling
– Libraries/languages for parallel programming
 Message Passing Interface (MPI)
28
Cluster Management
• Cluster management software offers
– An easy-to-use interface for managing clusters
– Automated queuing of jobs
– Matching of a job's requirements to the resources available in the cluster
– Migration of jobs across the cluster
• Maxwell uses Red Hat Enterprise Linux
29
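The job/resource matching described above can be sketched as a toy scheduler. This is purely illustrative (the names `schedule`, `cores_needed` and the first-fit policy are this sketch's own assumptions); real cluster managers and batch systems are far more elaborate:

```python
# Toy first-fit scheduler: match each queued job to the first node with
# enough free cores. A sketch of the matching idea only, not a real scheduler.
from collections import deque

def schedule(jobs, nodes):
    """jobs: list of (name, cores_needed); nodes: dict node_name -> free cores."""
    placements = []
    pending = deque(jobs)
    while pending:
        name, need = pending.popleft()
        for node, free in nodes.items():
            if free >= need:
                nodes[node] = free - need        # reserve the cores
                placements.append((name, node))
                break
        else:
            placements.append((name, None))      # no node fits: job stays queued
    return placements

nodes = {"node1": 8, "node2": 8}
jobs = [("simA", 6), ("simB", 4), ("simC", 8)]
# simA fits node1, simB fits node2, simC must wait for cores to free up
assert schedule(jobs, nodes) == [("simA", "node1"), ("simB", "node2"), ("simC", None)]
```

A real scheduler would also weigh memory limits, run-time limits and priorities, which is exactly the information the `bsub.sh` submission file later in this tutorial carries.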
Cluster Management
• Red Hat Enterprise Linux
– Well suited to scientific computing: deploys clusters of systems that work together
– Excellent hardware detection and monitoring capabilities
– Centralized authentication and logging services
– Fast IO (Input/Output)
30
Parallel Computing
• Form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently, i.e. "in parallel"
• Different forms of parallel computing
– Bit-level parallelism
– Instruction-level parallelism
– Data parallelism
– Task parallelism
• Parallel computer classification
– Multiple processing elements (multi-core and multi-processor) within a single machine
– Using multiple computers to work on the same task – clusters, MPPs (Massively Parallel Processing), and grids
31
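The "divide a large problem into smaller ones solved concurrently" principle can be shown with a minimal stdlib-only sketch (the problem, a sum of squares, and the chunking are illustrative assumptions, not part of the tutorial's cluster setup):

```python
# Data parallelism sketch: split one large sum into chunks and let a pool
# of worker processes compute the partial sums concurrently.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 100_000
    workers = 4
    step = n // workers
    # Four non-overlapping sub-problems covering range(0, n)
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    # The parallel result matches the sequential one
    assert total == sum(i * i for i in range(n))
```

On a cluster the same idea holds, except the "workers" are separate nodes and the partial results travel over the network instead of between local processes.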
Parallel Programming
• Parallel computer programs are more difficult to write than sequential programs
• Potential problems
– Race conditions (output depending on the sequence or timing of other events)
– Communication and synchronization between the different subtasks
• HPC parallel programming models associated with different computing technology
– Single Instruction Multiple Data (SIMD) on single processors
– Multi-process and multi-threading on SMP (symmetric multiprocessing) computers
– Message Passing Interface (MPI) on clusters
32
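The race-condition problem named above can be made concrete with a small threading sketch (the counter and thread counts are illustrative): several threads perform a read-modify-write on shared state, and only synchronization makes the result deterministic.

```python
# Race condition sketch: counter += 1 is a read-modify-write; if two threads
# interleave those steps, updates are lost. The lock serializes the updates.
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:          # remove this lock and the final count may fall short
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Deterministic only because every increment was done under the lock
assert counter == 4 * 10_000
```

MPI programs sidestep this particular hazard by giving each process its own memory, but they trade it for the communication and synchronization issues listed above.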
Parallel Programming
• Message Passing Interface (MPI)
– An application programming interface (API) specification that allows processes to communicate with one another by sending and receiving messages
– Now a de facto standard for parallel programs running on distributed-memory systems in computer clusters and supercomputers
– A message passing API with language-independent protocol and semantic specifications
– Supports both point-to-point and collective communication
– Goals are high performance, scalability, and portability
– Consists of a specific set of routines (i.e. APIs) directly callable from C, C++, Fortran and any language able to interface with such libraries, including C#, Java or Python
33
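Point-to-point communication means one process explicitly sends a message that another explicitly receives. The sketch below is NOT MPI itself (MPI is typically driven from C or Fortran via MPI_Send/MPI_Recv); it is a stdlib-only analogy using two local processes and a pipe, with illustrative names throughout:

```python
# Analogy for MPI-style point-to-point messaging: a "rank 1" worker process
# blocks on a receive, then sends a reply back -- roughly the MPI_Recv /
# MPI_Send pattern, but over a local pipe rather than a cluster network.
from multiprocessing import Process, Pipe

def worker(conn, rank):
    msg = conn.recv()                       # blocking receive, like MPI_Recv
    conn.send(f"rank {rank} got: {msg}")    # send a reply, like MPI_Send
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end, 1))
    p.start()
    parent_end.send("hello")                # message to "rank 1"
    reply = parent_end.recv()
    p.join()
    assert reply == "rank 1 got: hello"
```

The mpi_test example run on Maxwell later in this tutorial exercises the real thing, with processes spread across cluster nodes instead of one machine.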
Maxwell: A Brief Introduction
Tech Specs:
 No. of nodes: 106
 Each node has two Intel quad-core 2.3 GHz CPUs
 Total no. of cores: 848
 RAM per node: 8 GB
 Storage: 90 TB
35
How to create an account?
• Send an email to
– Nancy Beasley nancyj0@siu.edu or
– Dr. Shaikh Ahmed ahmed@siu.edu
• Provide the following information
– Name
– Affiliation
– IP address of the computer(s) on the SIUC network from which you would access Maxwell
• You will receive an email with login information
36
How to create an account?
• IP address look-up
– Go to Start
– Type 'cmd' to open the command prompt
– Type 'ipconfig' to get the IP address
(Slides 37–41: screenshots of these steps)
Login Procedure
• Download 'Putty'
– Web addresses
 http://www.putty.org/
 http://download.cnet.com/PuTTY/3000-7240_4-10808581.html
– Run 'Putty'
 Use Host Name or IP address of Maxwell
– Host Name: maxwell.ecehpc.siuc.edu
 Enable X11
42
Login Procedure
• Run 'Putty'
– Start 'Session': type in Host Name "maxwell.ecehpc.siuc.edu"
– Enable X11: Connection > SSH > X11, check 'Enable X11 Forwarding'
– Press Open to start the session
(Slides 43–45: screenshots of these steps)
Login Procedure
• Type in your Login ID, e.g. 'nanoSIUC'
• You are then given a prompt with your Login ID – the interface for using Maxwell
(Slides 46–48: screenshots of the login prompt)
MPI Test
• Copy the "mpi_test" directory to your "home" directory
– Type in the following command
 cp -r /home/nanoSIUC/mpi_test .
49
Run MPI
• Type 'ls' to view the content of your account
• Change directory and view the files in 'mpi_test'; open the files to see their content
(Slides 50–52: screenshots of these steps)
Run MPI
• File: 'bsub.sh'
– Sets job_name, output_file, run_limit, memory_limit, processor_use and run_time, and contains the command to run a parallel program by MPI
– Change 'nanoSIUC' to your login ID
53
Run MPI
• File: 'submit_script.sh'
– Creates a new directory to hold the generated output
– Directory name: cpi_test_<number of nodes used>
54
Run MPI
• Script to run MPI
– ./submit_script.sh bsub.sh <# of Processors> <input file>
– <# of Processors> is an integer
– <input file> is optional
 If in a different directory, use the path of the input file as well
55
Run MPI
• Running the MPI program with <# of processors> creates the output directory
• Viewing output: the output directory contains the output file; open it to see its content
(Slides 56–58: screenshots of running the program and viewing the output)