
Available HPC resources at CSUC


Presentation by Adrián Macía (lead Applications technician at CSUC), given at the training session "Com usar el servei de càlcul del CSUC" (How to use CSUC's computing service) held on 8 October 2019 at CSUC.


  1. 1. Available HPC resources at CSUC Adrián Macía 08-10-2019
  2. 2. Summary • Who are we? • High performance computing at CSUC • Hardware facilities • Working environment • Development environment
  3. 3. Summary • Who are we? • High performance computing at CSUC • Hardware facilities • Working environment • Development environment
  4. 4. What is the CSUC? • CSUC is a public consortium born from the merger of CESCA and CBUC • Member institutions of the consortium: • Associated institutions:
  5. 5. What do we do? Scientific computing • Communications • IT infrastructures • Procurements • Scientific documentation management • Joint purchases • Electronic administration
  6. 6. Summary • Who are we? • High performance computing at CSUC • Hardware facilities • Working environment • Development environment
  7. 7. HPC matters • Nowadays, simulation is a fundamental tool for solving and understanding problems in science and engineering: Theory • Simulation • Experiment
  8. 8. HPC role in science and engineering • HPC allows researchers to solve problems that otherwise could not be tackled • Numerical simulations are used in a wide variety of fields such as: Chemistry and materials sciences • Life and health sciences • Mathematics, physics and engineering • Astronomy, space and Earth sciences
  9. 9. Main applications per knowledge area • Chemistry and materials science: VASP, Siesta, Gaussian, ADF, CP2K • Life and health sciences: Amber, Gromacs, NAMD, Schrödinger, VMD • Mathematics, physics and engineering: OpenFOAM, FDS, Code Aster, Paraview • Astronomy and Earth sciences: WRF, WPS
  10. 10. Software available • A detailed list of the installed software is available at: https://confluence.csuc.cat/display/HPCKB/Installed+software • If you don't find your application, ask the support team and we will be happy to install it for you or help you with the installation process
  11. 11. Demography of the service: users • 32 research projects from 14 different institutions are using our HPC service. • These projects are distributed in: – 10 large HPC projects (> 500.000 UC) – 1 medium HPC project (250.000 UC) – 21 small HPC projects (≤ 100.000 UC)
  12. 12. Demography of the service: jobs (I) Jobs per # cores
  13. 13. Demography of the service: jobs (II) % Jobs vs Memory/core
  14. 14. Top 10 apps per usage (2019)
  15. 15. Usage per knowledge area (2019)
  16. 16. Wait time of the jobs % Jobs vs wait time
  17. 17. Wait time vs Job core count
  18. 18. Summary • Who are we? • High performance computing at CSUC • Hardware facilities • Working environment • Development environment
  19. 19. Hardware facilities • Canigó (2018): Bull Sequana X800, 384 cores Intel SP Platinum 6148, 9 TB RAM memory, 33.18 Tflop/s • Pirineus II (2018): Bull Sequana X550, 2688 cores Intel SP Platinum 6148, 4 nodes with 2 GPUs + 4 Intel KNL nodes, 283.66 Tflop/s
  20. 20. Canigó • Shared memory machines (2 nodes) • 33.18 Tflop/s peak performance (16.59 per node) • 384 cores (8 CPUs Intel SP Platinum 8168 per node) • Frequency of 2.7 GHz • 4.6 TB main memory per node • 20 TB disk storage
  21. 21. 4 nodes with 2 x GPGPU • 48 cores (2x Intel SP Platinum 8168, 2.7 GHz) • 192 GB main memory • 4.7 Tflop/s per GPGPU 4 Intel KNL nodes • 1 x Xeon-Phi 7250 (68 cores @ 1.5 GHz, 4 hw threads) • 384 GB main memory per node • 3.5 Tflop/s per node Pirineus II
  22. 22. Standard nodes (44 nodes) • 48 cores (2x Intel SP Platinum 6148, 2.7 GHz) • 192 GB main memory (4 GB/core) • 4 TB disk storage per node High memory nodes (6 nodes) • 48 cores (2x Intel SP Platinum 6148, 2.7 GHz) • 384 GB main memory (8 GB/core) • 4 TB disk storage per node Pirineus II
  23. 23. New high-performance scratch system • New high-performance storage available, based on BeeGFS • 180 TB total space available • Very high read/write speed • InfiniBand HDR direct connection (100 Gbps) between the BeeGFS cluster and the compute nodes.
  24. 24. HPC Service infrastructure at CSUC Canigó Pirineus II
  25. 25. Summary of HW infrastructure (Canigó / Pirineus II / Total) • Cores: 384 / 2 688 / 3 072 • Total peak performance (Tflop/s): 33.18 / 283.66 / 317 • Total power consumption (kW): 5.24 / 32.80 / 38 • Efficiency (Tflop/s/kW): 6.33 / 8.65 / 8.34
  26. 26. Evolution of the performance of HPC at CSUC (10×)
  27. 27. Summary • Who are we? • High performance computing at CSUC • Hardware facilities • Working environment • Development environment
  28. 28. Working environment • The working environment is shared between all the users of the service. • Each machine runs a GNU/Linux operating system (Red Hat). • Computational resources are managed by the Slurm workload manager. • Compilers and development tools available: Intel, GNU and PGI
  29. 29. Batch manager: Slurm • Slurm manages the available resources in order to achieve an optimal distribution between all the jobs in the system • Slurm assigns a different priority to each job depending on many factors … more on this after the coffee!
  30. 30. Storage units (*) There is a limit per project depending on the number of users; the project quotas are 4, 8 and 16 GB for 5, 10 and 20 users respectively. We are working on improving these limits right now.
  31. 31. How to access our services? • You can apply for a RES (Red Española de Supercomputación) project asking to work at CSUC (on Pirineus II or Canigó). More information: https://www.res.es/es/acceso-a-la-res • If you are not granted a RES project, or you are not interested in applying for one, you can still work with us. More info at https://www.csuc.cat/ca/supercomputacio/sollicitud-d-us
  32. 32. HPC Service price (academic projects¹) • Initial block: Group I, 500.000 UC: 8.333,33 € – Group II, 250.000 UC: 5.555,55 € – Group III, 100.000 UC: 3.333,33 € • Additional 50.000 UC block: 280 €/block when you have paid for 500.000 UC – 1.100 €/block when you have paid for 250.000 UC – 1.390 €/block when you have paid for 100.000 UC • DGR discount for Catalan academic groups: -10 % ¹10% discount for Catalan entities due to the funding that we receive from DGR
  33. 33. Accounting HPC resources • In order to quantify the resources used we introduce the UC (computational unit). • It is defined as UC = HC (computational hour) × factor – For standard nodes, 1 HC = 1 UC. Factor = 1. – For GPU nodes, 1 HC = 1 UC. Factor = 1. (*) – For KNL nodes, 1 HC = 0.5 UC. Factor = 0.5. (**) – For Canigó (SMP), 1 HC = 2 UC. Factor = 2. (*) You need to allocate a full socket (24 cores) at minimum (**) You need to allocate the full node (72 cores)
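A minimal sketch of how the UC accounting above plays out in practice. The helper function and the hard-coded factors are illustrative only (taken from the slide), not an official CSUC tool:

    #include <stdio.h>

    /* Illustrative only: UC = core-hours (HC) x partition factor, per slide 33. */
    static double uc_used(double cores, double hours, double factor) {
        return cores * hours * factor;
    }

    int main(void) {
        /* Example: a 24-core job running for 10 hours on different partitions. */
        printf("standard node: %.0f UC\n", uc_used(24, 10, 1.0)); /* 240 UC */
        printf("KNL node:      %.0f UC\n", uc_used(24, 10, 0.5)); /* 120 UC */
        printf("Canigo (SMP):  %.0f UC\n", uc_used(24, 10, 2.0)); /* 480 UC */
        return 0;
    }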
  34. 34. Choosing your architecture: HPC partitions // queues • We have 4 partitions available for the users: std, gpu, knl and mem, working on the standard, GPU, KNL and shared-memory nodes respectively. • Initially users can only use the std partition, but if they want to use a different architecture they only need to request permission and it will be granted. … more on this later...
  35. 35. Do you need help? http://hpc.csuc.cat
  36. 36. Documentation: HPC Knowledge Base http://hpc.csuc.cat
  37. 37. Problems or requests? Service Desk http://hpc.csuc.cat
  38. 38. Summary • Who are we? • High performance computing at CSUC • Hardware facilities • Working environment • Development environment
  39. 39. Development tools @ CSUC HPC • Compilers available for the users: – Intel compilers – PGI compilers – GNU compilers • MPI libraries: – Open MPI – Intel MPI – MPICH – MVAPICH
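For reference, a minimal MPI program in C that any of the listed MPI libraries can build. The mpicc wrapper in the comment is the generic one shipped by Open MPI, MPICH and MVAPICH; the exact module and wrapper names on the CSUC machines may differ:

    /* hello_mpi.c - minimal MPI example.
     * Typical build (wrapper name depends on the loaded MPI module):
     *     mpicc hello_mpi.c -o hello_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }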
  40. 40. Development tools @ CSUC HPC • Intel Advisor, VTune, ITAC, Inspector • Scalasca • Mathematical libraries: – Intel MKL – LAPACK – ScaLAPACK – FFTW • If you need anything that is not installed, let us know
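As a small illustration of calling one of these libraries, the sketch below computes a dot product through the CBLAS interface. It assumes Intel MKL's mkl.h header and the -qmkl link flag; the exact compiler flags and library setup at CSUC may differ:

    /* dot.c - dot product via the CBLAS interface (here through Intel MKL).
     * Typical build with the Intel compiler (flags may vary by version):
     *     icc dot.c -qmkl -o dot
     */
    #include <stdio.h>
    #include <mkl.h>

    int main(void) {
        double x[] = {1.0, 2.0, 3.0};
        double y[] = {4.0, 5.0, 6.0};
        /* cblas_ddot(n, x, incx, y, incy) returns sum_i x[i]*y[i] */
        double d = cblas_ddot(3, x, 1, y, 1);
        printf("dot = %f\n", d);  /* expected: 32.0 */
        return 0;
    }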
  41. 41. Questions?
  42. 42. MOLTES GRÀCIES (thank you very much) http://hpc.csuc.cat Cristian Gomollón, Adrián Macía, Ricard de la Vega, Ismael Fernàndez, Víctor Pérez
