The document summarizes available HPC resources at CSUC, including hardware facilities, the working environment, development tools, and how to access services. The main systems are Canigó with 384 cores and 33 TFlops peak performance, and Pirineus II with 2,688 cores and 284 TFlops. Resources are managed by Slurm and available partitions include standard, GPU, and Intel KNL nodes. Users can access resources through RES projects or by purchasing compute units.
9. New paradigm: scientific computing
• Science needs to solve problems that cannot be solved otherwise
• Development of new theoretical and technological tools
• Problem solving that leads to new questions
14. Usage examples: simulations in life sciences
• Interaction between the SARS-CoV-2 spike protein and different surfaces
• Prediction of protein structures using Artificial Intelligence
15. Usage examples: simulations in materials science
• Emergent structures in ultracold materials
• Graphene electronic structure
• Adsorbed molecules on surfaces
16. Main applications per knowledge area
• Chemistry and Materials Science
• Life and Health Sciences
• Mathematics, Physics and Engineering
• Astronomy and Earth Sciences
17. Software available
In the following link you can find a detailed list of the installed software: https://confluence.csuc.cat/display/HPCKB/Installed+software
If you don't find your application, ask the support team and we will be happy to install it for you or help you with the installation process.
18. National Competence Center in HPC
• EuroCC is an H2020 European project that aims to establish a network of National Competence Centres (NCCs) in HPC, HPDA and AI in each country involved in the project: https://www.eurocc-project.eu/
• The aim of the project is to promote the usage of scientific computing, mainly for SMEs, but also in academia and public administration
• We are participating in the Spanish NCC with 7 other institutions that also provide computing services: https://eurocc-spain.res.es/
20. Demography of the service: users
47 research projects from 22 different institutions
are using our HPC service.
These projects are distributed as follows:
• 11 Large HPC projects (> 500.000 UC)
• 4 Medium HPC projects (250.000 UC)
• 11 Small HPC projects (100.000 UC)
• 2 XSmall HPC projects (40.000 UC)
• 19 RES projects
30. Canigó
• Shared memory machines (2 nodes)
• 33.18 Tflop/s peak performance (16.59 per node)
• 384 cores (8 Intel SP Platinum 8168 CPUs per node)
• Frequency of 2.7 GHz
• 4.6 TB main memory per node
• 20 TB disk storage
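As a sanity check on those figures, assuming 32 double-precision FLOPs per core per cycle (two AVX-512 FMA units, as on the Platinum 8168):

192 cores/node × 2.7 GHz × 32 FLOP/cycle = 16,588.8 GFlop/s ≈ 16.59 Tflop/s per node, i.e. 33.18 Tflop/s for the two nodes.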
31. Pirineus II: GPU and KNL nodes
4 nodes with 2 x GPGPU
• 48 cores (2x Intel SP Platinum 8168, 2.7 GHz)
• 192 GB main memory
• 4.7 Tflop/s per GPGPU
4 Intel KNL nodes
• 1 x Xeon Phi 7250 (68 cores @ 1.5 GHz, 4 hw threads per core)
• 384 GB main memory per node
• 3.5 Tflop/s per node
32. Pirineus II: standard and high-memory nodes
Standard nodes (44 nodes)
• 48 cores (2x Intel SP Platinum 6148, 2.7 GHz)
• 192 GB main memory (4 GB/core)
• 4 TB disk storage per node
High-memory nodes (6 nodes)
• 48 cores (2x Intel SP Platinum 6148, 2.7 GHz)
• 384 GB main memory (8 GB/core)
• 4 TB disk storage per node
33. High performance scratch system
• High performance storage based on BeeGFS
• 180 TB total space available
• Very high read/write speed
• InfiniBand HDR direct connection (100 Gbps) between the BeeGFS cluster and the compute nodes
37. Summary
Who are we?
Scientific computing at CSUC
Hardware facilities
Working environment
Development environment
How to access our services?
38. Working environment
The working environment is shared between all the users of the service.
Each machine runs a GNU/Linux operating system (Red Hat).
Computational resources are managed by the Slurm Workload Manager.
Compilers and development tools available: Intel, GNU and PGI.
39. Batch manager: Slurm
Slurm manages the available resources in order to achieve an optimal distribution between all the jobs in the system.
Slurm assigns a different priority to each job depending on many factors.
… more on this after the coffee!
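A minimal sketch of a job submission script (the partition name, core count and time limit are illustrative; the actual defaults and limits are discussed later):

#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --partition=std           # one of the available partitions (see the partitions slide)
#SBATCH --ntasks=48               # number of tasks (cores)
#SBATCH --time=02:00:00           # wall-time limit
#SBATCH --output=example_%j.log   # %j expands to the job ID

srun ./my_application             # run on the allocated resources

Submit it with sbatch job.sh and monitor it with squeue.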
40. Storage units

Name                        Variable                    Availability  Quota          Time limit     Backup
/home/$USER                 $HOME                       Global        25-200 GB (*)  Unlimited      Yes
/scratch/$USER/             −                           Global        1 TB           30 days        No
/scratch/$USER/tmp/$JOBID   $SCRATCH / $SHAREDSCRATCH   Global        1 TB           7 days         No
/tmp/$USER/$JOBID           $SCRATCH / $LOCALSCRATCH    Local node    −              Job execution  No

(*) There is a limit per project depending on the project category: Group I: 200 GB, Group II: 100 GB, Group III: 50 GB, Group IV: 25 GB.
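A common pattern (illustrative; the variable names are the ones from the table above) is to run the job from the shared scratch and copy the results back to $HOME before the job ends:

#!/bin/bash
#SBATCH --partition=std
#SBATCH --ntasks=24
#SBATCH --time=01:00:00

cd $SHAREDSCRATCH                  # per-job shared scratch (purged after 7 days)
cp $HOME/inputs/case.inp .         # stage the input data
srun ./solver case.inp             # run on the allocated cores
cp results.dat $HOME/outputs/      # copy the results back to backed-up storage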
41. Choosing your architecture: HPC partitions // queues
We have 5 partitions available to the users: std, std-fat, gpu, knl and mem, running on the standard, high-memory, GPU, KNL and shared-memory nodes respectively.
Each user can use any of them (except RES users, who are restricted to their own partitions) depending on their needs.
… more on this later...
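For example (the core count is illustrative; check the user documentation for the exact GPU request syntax), selecting the GPU partition in a job script:

#SBATCH --partition=gpu    # choose one of: std, std-fat, gpu, knl, mem
#SBATCH --ntasks=24        # a full socket, the minimum for GPU jobs (see the accounting slide)

The sinfo command lists the partitions and their current state.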
45. Summary
Who are we?
Scientific computing at CSUC
Hardware facilities
Working environment
Development environment
How to access our services?
46. Development tools @ CSUC HPC
Compilers available for the users:
• Intel compilers
• PGI compilers
• GNU compilers
MPI libraries (a build example follows this list):
• Open MPI
• Intel MPI
• MPICH
• MVAPICH
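An illustrative build-and-run sequence (the module names are assumptions; check module avail for the exact names and versions installed):

module load gcc openmpi            # load a compiler and an MPI library
mpicc -O2 -o hello hello_mpi.c     # compile with the MPI compiler wrapper
srun -n 48 ./hello                 # run under Slurm with 48 MPI tasks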
47. Development tools @ CSUC HPC
Profiling and analysis tools:
• Intel Advisor, VTune, ITAC, Inspector
• Scalasca
Mathematical libraries (a linking example follows this list):
• Intel MKL
• Lapack
• Scalapack
• FFTW
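As an illustrative example of linking against the mathematical libraries (module names and flags are assumptions; they depend on the installed compiler versions):

module load intel                            # assumption: an Intel compiler module exists
icc -O2 -qmkl -o blas_test blas_test.c       # -qmkl (-mkl on older versions) links MKL BLAS/LAPACK
gcc -O2 -o fft_test fft_test.c -lfftw3 -lm   # link against FFTW with the GNU compiler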
If you need anything that is not installed, let us know.
48. Summary
Who are we?
Scientific computing at CSUC
Hardware facilities
Working environment
Development environment
How to access our services?
49. How to access our services?
If you have not been granted a RES project, or you are not interested in applying for one, you can still work with us. More information at https://www.csuc.cat/ca/supercomputacio/sollicitud-d-us
50. HPC Service price
Academic project¹
Initial block:
• Group I (500.000 UC): 8.333,33 €
• Group II (250.000 UC): 5.555,55 €
• Group III (100.000 UC): 3.333,33 €
• Group IV (50.000 UC): 1.666,66 €
Additional 50.000 UC block:
• When you have paid for 500.000 UC: 280 €/block
• When you have paid for 250.000 UC: 665 €/block
• When you have paid for 100.000 UC: 945 €/block
• When you have paid for 40.000 UC: 1.835 €/block
DGR discount for Catalan academic groups: -10 %
51. Accounting HPC resources
There are some considerations concerning the accounting of HPC resources:
• If you want to use the gpu partition you need to allocate at least a full socket (24 cores). This is because we do not want two different jobs sharing the same GPU.
• If you want to use the KNL nodes you need to allocate the full node (68 cores), for the same reason as in the previous case.
• Each partition has an associated default memory per core. If you need more than that, ask for it and the system will assign more cores (with their associated memory) to your job, as sketched below.
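A sketch of requesting extra memory (the 4 GB/core default is the figure quoted for the standard nodes; the extra-core accounting is the behaviour described above):

#SBATCH --partition=std
#SBATCH --ntasks=24
#SBATCH --mem-per-cpu=8G   # twice the 4 GB/core default, so the job is accounted with correspondingly more cores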
52. Access through RES project
You can apply for a RES (Red Española de Supercomputación) project asking to work at CSUC (on Pirineus II or Canigó). More information at https://www.res.es/es/acceso-a-la-res