2. Summary
• Who are we?
• Scientific computing at CSUC
• Hardware facilities
• Working environment
• Development environment
• How to access our services?
9. New paradigm: scientific computing
• Science needs to solve problems that otherwise could not be solved
• Development of new theoretical and technological tools
• Problem resolution that leads to new questions
13. Usage examples: Engineering simulations
• Aerodynamics of a plane
• Vibrations in structures
• Thermal simulation of lighting systems
• Thermal distribution in a brake disc
14. Usage examples: Simulations in life sciences
• Interaction between the SARS-CoV-2 spike protein and different surfaces
• Prediction of protein structures using Artificial Intelligence
15. Usage examples: Simulations in materials science
• Emergent structures in ultracold materials
• Graphene electronic structure
• Molecules adsorbed on surfaces
16. Main applications per knowledge area
• Chemistry and Materials Science
• Life and Health Sciences
• Mathematics, Physics and Engineering
• Astronomy and Earth Sciences
17. Software available
• A detailed list of the installed software is available at: https://confluence.csuc.cat/display/HPCKB/Installed+software
• If you don't find your application, ask the support team and we will be happy to install it for you or help you with the installation process
18. Demography of the service: users
52 research projects from 22 different institutions are using our HPC service. These projects are distributed as follows:
• 13 Large HPC projects (> 500,000 UC)
• 7 Medium HPC projects (250,000 UC)
• 3 Small HPC projects (100,000 UC)
• 12 XSmall HPC projects (50,000 UC)
• 2 Industrial HPC projects
• 13 RES projects
• 2 Test projects
31. Summary
• Who are we?
• Scientific computing at CSUC
• Hardware facilities
• Working environment
• Development environment
• How to access our services?
33. Canigó
• Shared-memory machines (2 nodes)
• 33.18 TFlop/s peak performance (16.59 per node)
• 384 cores (8 Intel SP Platinum 8168 CPUs per node)
• Frequency of 2.7 GHz
• 4.6 TB main memory per node
• 20 TB disk storage per node
34. Pirineus II
4 nodes with 2x GPGPU:
• 48 cores (2x Intel SP Platinum 8168, 2.7 GHz)
• 192 GB main memory
• Nvidia P100 GPGPU
• 4.7 TFlop/s per GPGPU
4 Intel KNL nodes:
• 1x Xeon Phi 7250 (68 cores @ 1.5 GHz, 4 hw threads)
• 384 GB main memory per node
• 3.5 TFlop/s per node
35. Pirineus II
Standard nodes (63 nodes):
• 44 nodes:
  - 48 cores (2x Intel SP Platinum 8168, 2.7 GHz)
  - 192 GB main memory (4 GB/core)
  - 4 TB disk storage per node
• 19 nodes:
  - 48 cores (2x Intel SP Platinum 8268, 2.9 GHz)
  - 192 GB main memory (4 GB/core)
  - 4 TB disk storage per node
High-memory nodes (6 nodes):
• 48 cores (2x Intel SP Platinum 8168, 2.7 GHz)
• 384 GB main memory (8 GB/core)
• 4 TB disk storage per node
36. High-performance scratch system
• High-performance storage based on BeeGFS
• 180 TB total space available
• Very high read/write speed
• InfiniBand HDR direct connection (100 Gbps) between the BeeGFS cluster and the compute nodes
38. Summary of HW infrastructure

                           Canigó   Pirineus II   TOTAL
Cores                      384      3,504         3,888
Total Rpeak (TFlop/s)      33       358           391
Power consumption (kW)     5.24     31.93         37
Efficiency (TFlop/s/kW)    6.33     11.6          10.5
39. Summary
• Who are we?
• Scientific computing at CSUC
• Hardware facilities
• Working environment
• Development environment
• How to access our services?
40. Working environment
• The working environment is shared between all the users of the service.
• Each machine runs the GNU/Linux operating system (Red Hat).
• Computational resources are managed by the Slurm workload manager.
• Compilers and development tools available: Intel, GNU and PGI.
41. Batch manager: Slurm
• Slurm manages the available resources to achieve an optimal distribution between all the jobs in the system
• Slurm assigns a different priority to each job depending on several factors (a minimal job script is sketched below)
… more on this after the coffee!
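As a minimal sketch, a Slurm job script might look like the one below; the partition name, resource values and program name are illustrative placeholders, not site defaults:

    #!/bin/bash
    #SBATCH --job-name=my_job        # name shown in the queue
    #SBATCH --partition=std          # one of the partitions described later
    #SBATCH --ntasks=48              # number of tasks (one full standard node)
    #SBATCH --time=01:00:00          # wall-clock limit

    srun ./my_program                # placeholder binary, launched under Slurm

Submit it with sbatch job.sh and inspect the queue with squeue -u $USER.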
42. Storage units

Name                        Variable                    Availability   Quota           Time limit      Backup
/home/$USER                 $HOME                       Global         25-200 GB (*)   Unlimited       Yes
/scratch/$USER/             −                           Global         1 TB            30 days         No
/scratch/$USER/tmp/$JOBID   $SCRATCH / $SHAREDSCRATCH   Global         1 TB            7 days          No
/tmp/$USER/$JOBID           $SCRATCH / $LOCALSCRATCH    Local node     −               Job execution   No

(*) There is a limit per project depending on the project category: group I 200 GB, group II 100 GB, group III 50 GB, group IV 25 GB.
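As an illustration of the table above, a job can stage data through the shared scratch; the file and program names below are hypothetical:

    #!/bin/bash
    #SBATCH --job-name=scratch_demo

    cd $SHAREDSCRATCH                # /scratch/$USER/tmp/$JOBID, visible from all nodes
    cp $HOME/input.dat .             # stage input from the backed-up home
    srun ./my_solver input.dat       # placeholder program
    cp output.dat $HOME/             # copy results back: this scratch is purged after 7 days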
43. Choosing your architecture: HPC partitions // queues
• We have 5 partitions available for the users: std, std-fat, gpu, knl and mem, running on the standard, standard fat, GPU, KNL and shared-memory nodes respectively.
• Each user can use any of them depending on their needs (except RES users, who are restricted to their own partitions), as in the sketch below.
… more on this later...
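For example, the target partition can be chosen at submission time (a sketch; job.sh is a placeholder script):

    sinfo                            # list the partitions and their state
    sbatch --partition=std job.sh    # run on the standard nodes
    sbatch --partition=gpu job.sh    # run on the GPU nodes instead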
47. Summary
• Who are we?
• Scientific computing at CSUC
• Hardware facilities
• Working environment
• Development environment
• How to access our services?
48. Development tools @ CSUC HPC
• Compilers available for the users:
o Intel compilers
o PGI compilers
o GNU compilers
• MPI libraries (see the compile sketch after this list):
o Open MPI
o Intel MPI
o MPICH
o MVAPICH
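A hedged example of compiling an MPI program with these toolchains; the module names and source file are assumptions, since the exact module layout is site-specific:

    module load gcc openmpi              # assumed module names for GNU + Open MPI
    mpicc -O2 -o hello_mpi hello_mpi.c   # the wrapper compiler adds the MPI flags

    # Intel equivalent (assumed module names):
    module load intel impi
    mpiicc -O2 -o hello_mpi hello_mpi.c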
49. Development tools @ CSUC HPC
• Intel Advisor, VTune, ITAC, Inspector
• Scalasca
• Mathematical libraries (linking sketch below):
o Intel MKL
o Lapack
o Scalapack
o FFTW
• If you need anything that is not installed, let us know
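As a sketch of linking against these libraries, assuming the Intel toolchain is provided through modules (module and file names are illustrative):

    module load intel                    # assumed module name
    icx -O2 -qmkl -o solver solver.c     # -qmkl pulls in the MKL BLAS/LAPACK/FFT routines

    # GNU alternative against the reference libraries:
    gcc -O2 -o solver solver.c -llapack -lblas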
50. Summary
• Who are we?
• Scientific computing at CSUC
• Hardware facilities
• Working environment
• Development environment
• How to access our services?
51. How to access our services?
• If you have not been granted a RES project, or you are not interested in applying for one, you can still work with us. More info at https://www.csuc.cat/ca/supercomputacio/sollicitud-d-us
53. Accounting HPC resources
There are some considerations concerning the accounting of HPC resources:
• If you want to use the gpu partition, you need to allocate at least a full socket (24 cores). This is imposed by the fact that we don't want two different jobs sharing the same GPU (see the example below).
• If you want to use the KNL nodes, you need to allocate the full node (68 cores), for the same reason as the previous case.
• Each partition has an associated default memory per core. If you need more than that, you should ask for it and the system will assign more cores (with their associated memory) to your job.
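A sketch of a GPU job that respects the full-socket rule above; the --gres line is an assumption about the local Slurm configuration, and the binary is a placeholder:

    #!/bin/bash
    #SBATCH --partition=gpu
    #SBATCH --ntasks=24              # a full socket, the minimum on the gpu partition
    #SBATCH --gres=gpu:1             # request one GPU (gres name assumed)

    srun ./my_gpu_program            # placeholder binary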
54. Access through a RES project
• You can apply for a RES (Red Española de Supercomputación) project asking to work at CSUC (on Pirineus II or Canigó). More information at https://www.res.es/es/acceso-a-la-res
56. EuroCC Spain testbed
• EuroCC is an H2020 European project that aims to establish a network of National Competence Centres (NCCs) in HPC, HPDA and AI in each country involved in the project
• The aim of the project is to promote the usage of scientific computing, mainly among SMEs, but also in academia and public administration
• Our HPC resources are also offered through the Spanish National Competence Centre in HPC
https://www.eurocc-project.eu/
https://eurocc-spain.res.es/
57. Quantum Spain Project
Goals:
• To install and operate a quantum computer based on superconducting qubits (BSC) and two quantum emulators (CESGA and SCAYLE)
• To start a remote-access (cloud) service for the quantum computer, to facilitate access for researchers interested in working with this new technology
• To provide support to the users of these quantum infrastructures
• To develop new quantum algorithms applicable to real problems, mainly focused on Quantum Machine Learning
Alejandro Jaramillo joined us as a quantum computing expert to provide support to the users and promote the usage of the infrastructure