High-Performance Computing Ecosystem in Europe
July 15th, 2009
Kimmo Koski, CSC – The Finnish IT Center for Science
Topics
- Terminology and definitions
- Emerging trends
- Stakeholders
- On-going Grid and HPC activities
- Concluding remarks
Terminology and pointers
- HPC: High Performance Computing
- HET (http://www.hpcineuropetaskforce.eu/): High Performance Computing in Europe Taskforce, established in June 2006 with a mandate to draft a strategy for a European HPC ecosystem
- Petaflop/s: performance figure, 10^15 floating point operations (calculations) per second
- e-IRG (http://www.eirg.eu): e-Infrastructure Reflection Group. e-IRG supports the creation of a framework (political, technological and administrative) for the easy and cost-effective shared use of distributed electronic resources across Europe, particularly for grid computing, storage and networking
- ESFRI (http://cordis.europa.eu/esfri/): European Strategy Forum on Research Infrastructures. The role of ESFRI is to support a coherent approach to policy-making on research infrastructures in Europe, and to act as an incubator for international negotiations about concrete initiatives. In particular, ESFRI is preparing a European Roadmap for new research infrastructures of pan-European interest
- RI: Research Infrastructure
Terminology and pointers (cont.)
- PRACE (http://www.prace-project.eu/): Partnership for Advanced Computing in Europe. EU FP7 project for the preparatory phase of building the European petaflop computing centers, based on HET work
- DEISA-2 (https://www.deisa.org/): Distributed European Infrastructure for Supercomputing Applications. DEISA is a consortium of leading national supercomputing centers that currently deploys and operates a persistent, production-quality, distributed supercomputing environment with continental scope
- EGEE-III (http://www.eu-egee.org/): Enabling Grids for E-sciencE. The project provides researchers in academia and industry with access to a production-level Grid infrastructure, independent of their geographic location
- EGI_DS (http://www.eu-egi.org/): an effort to establish a sustainable grid infrastructure in Europe
- GÉANT2 (http://www.geant2.net/): seventh generation of the pan-European research and education network
Computational science infrastructure
Performance Pyramid
[Pyramid diagram: TIER 0 – European HPC center(s), capability computing (PRACE); TIER 1 – national/regional centers, Grid collaboration (DEISA-2, EGEE-III); TIER 2 – local centers, capacity computing; e-IRG spanning the layers.]
Need to remember about petaflop/s…
What do you mean by petaflop/s?
1. Theoretical petaflop/s?
2. LINPACK petaflop/s?
3. Sustained petaflop/s for a single extremely parallel application?
4. Sustained petaflop/s for multiple parallel applications?
Note that between 1 and 4 there might be several years.
Petaflop/s hardware needs petaflop/s applications, which are not easy to program, or not even possible in many cases.
Do we even know how to scale over 100,000 processors?
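The scaling question above can be made concrete with Amdahl's law. The sketch below is illustrative only (the function and the serial fractions are chosen for this example, not taken from the slides): even a 0.1% serial fraction caps the speedup of a fixed-size problem on 100,000 processors below 1,000x.

```python
# Amdahl's law: the speedup of a fixed-size problem on p processors is
# limited by the fraction of the work that stays serial.
# Illustrative sketch; the serial fractions below are hypothetical.

def amdahl_speedup(serial_fraction: float, p: int) -> float:
    """Best-case speedup on p processors when serial_fraction of the
    single-processor runtime cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

for s in (0.01, 0.001, 0.0001):
    speedup = amdahl_speedup(s, 100_000)
    print(f"serial fraction {s:.2%}: speedup on 100,000 cores ~ {speedup:,.0f}x")
```

This is why petaflop/s hardware alone is not enough: the applications themselves have to be engineered for near-total parallelism.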
Emerging trends
Global Virtual Research Communities
[Diagram 1: today, each research community maintains its own full stack – human interaction, workspace, labs, scientific data, computing/Grid, network.]
[Diagram 2: virtual communities with virtual labs share common scientific data, Grid and network layers, yielding economies of scale and efficiency gains.]
Data and information explosion
Petascale computing produces exascale data
HPC is a part of a larger ecosystem
[Layered diagram: disciplines and user communities; competence; software development; data infrastructures and services; HPC and Grid infrastructures.]
HPC Ecosystem to support the top
The upper layers of the pyramid:
- HPC centers / services
- European projects (HPC/Grid, networking, …)
Activities which enable efficient usage of the upper layers:
- Inclusion of national HPC infrastructures
- Software development and scalability issues
- Competence development
- Interoperability between the layers
Stakeholders
Stakeholder categories in PRACE
- Providers of HPC services
- European HPC and grid projects
- Networking infrastructure providers
- Hardware vendors
- Software vendors and the software-developing academic community
- End users and their access through related Research Infrastructures
- Funding bodies on a national and international level
- Policy-setting organisations directly involved in developing the research infrastructure, and political bodies like parliaments responsible for national and international legislation
Policy and strategy work
- HET: HPC in Europe Taskforce, http://www.hpcineuropetaskforce.eu/
- e-IRG: e-Infrastructure Reflection Group, http://www.e-irg.org/
- ESFRI: European Strategy Forum on Research Infrastructures, http://www.cordis.lu/esfri/
- ERA Expert Group on Research Infrastructures
Some focus areas
- Collaboration between research and e-infrastructure providers
- Horizontal ICT services
- Balanced approach: more focus on data, software development and competence development
- Inclusion of different countries, with different contribution levels
- New emerging technologies, innovative computing initiatives
- Global collaboration, for example the Exascale computing initiative
- Policy work, resource exchange, sustainable services, etc.
On-going Grid and HPC activities
EU infrastructure projects
- GÉANT
- A number of data infrastructure projects
Supercomputing Drives Science through Simulation
- Environment: weather/climatology, pollution/ozone hole
- Ageing society: medicine, biology
- Energy: plasma physics, fuel cells
- Materials/information technology: spintronics, nanoscience
PRACE Initiative: History and First Steps
Timeline 2004–2008, from HPCEUR to HET:
- Production of the HPC part of the ESFRI Roadmap; creation of a vision involving 15 European countries
- Creation of the Scientific Case; bringing scientists together
- Signature of the MoU
- Submission of an FP7 project proposal
- Approval of the project; project start
HET: The Scientific Case
- Weather, climatology, earth science: degree of warming, scenarios for our future climate; understanding and predicting ocean properties and variations; weather and flood events
- Astrophysics, elementary particle physics, plasma physics: systems and structures spanning a large range of length and time scales; quantum field theories like QCD; ITER
- Material science, chemistry, nanoscience: understanding complex materials, complex chemistry, nanoscience; determination of electronic and transport properties
- Life science: systems biology, chromatin dynamics, large-scale protein dynamics, protein association and aggregation, supramolecular systems, medicine
- Engineering: complex helicopter simulation, biomedical flows, gas turbines and internal combustion engines, forest fires, green aircraft, virtual power plant
First success: HPC in the ESFRI Roadmap
The European Roadmap for Research Infrastructures is the first comprehensive definition at the European level. Research Infrastructures are one of the crucial pillars of the European Research Area.
A European HPC service – impact foreseen:
- attractiveness for researchers
- supporting industrial development
Third success: the PRACE project
Partnership for Advanced Computing in Europe (PRACE)
- EU project of the European Commission 7th Framework Program, construction of new infrastructures – preparatory phase (FP7-INFRASTRUCTURES-2007-1)
- Partners: 16 legal entities from 14 European countries
- Budget: 20 Mio €, EU funding: 10 Mio €
- Duration: January 2008 – December 2009
- Grant no: RI-211528
PRACE Partners
PRACE Work Packages
- WP1 Management
- WP2 Organizational concept
- WP3 Dissemination, outreach and training
- WP4 Distributed computing
- WP5 Deployment of prototype systems
- WP6 Software enabling for prototype systems
- WP7 Petaflop systems for 2009/2010
- WP8 Future petaflop technologies
PRACE Objectives in a Nutshell
- Provide world-class systems for world-class science
- Create a single European entity
- Deploy 3 – 5 systems of the highest performance level (tier-0)
- Ensure diversity of architectures
- Provide support and training
PRACE will be created to stay
Representative Benchmark Suite
- Defined a set of application benchmarks to be used in the procurement process for petaflop/s systems
- 12 core applications, plus 8 additional applications
  - Core: NAMD, VASP, QCD, CPMD, GADGET, Code_Saturne, TORB, ECHAM5, NEMO, CP2K, GROMACS, N3D
  - Additional: AVBP, HELIUM, TRIPOLI_4, PEPC, GPAW, ALYA, SIESTA, BSIT
- Each application will be ported to an appropriate subset of prototypes
- Synthetic benchmarks for architecture evaluation: computation, mixed-mode, I/O, bandwidth, OS, communication
- Applications and synthetic benchmarks integrated into JuBE (Juelich Benchmark Environment)
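As a toy illustration of what a synthetic floating-point benchmark measures, the sketch below times a dense matrix multiply and reports sustained Gflop/s. It is a hypothetical stand-in written for this text, not part of the PRACE suite or JuBE; it relies only on the fact that an n×n dense product costs roughly 2·n³ floating point operations.

```python
# Minimal synthetic flop-rate benchmark (illustrative only).
import time
import numpy as np

def matmul_gflops(n: int = 1024, repeats: int = 3) -> float:
    """Sustained Gflop/s for an n x n dense matrix product,
    counting ~2*n**3 floating point operations per multiply."""
    rng = np.random.default_rng(0)
    a = rng.random((n, n))
    b = rng.random((n, n))
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b                      # the timed kernel
        best = min(best, time.perf_counter() - t0)
    return 2.0 * n**3 / best / 1e9

print(f"sustained ~ {matmul_gflops():.1f} Gflop/s on this machine")
```

The gap between such a sustained figure and a machine's theoretical peak is exactly the distinction drawn on the earlier petaflop/s slide.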
Mapping Applications to Architectures
- Based on the application analysis, expressed in a condensed, qualitative way
- Need for different "general purpose" systems
- There are promising emerging architectures
- Will be more quantitative after benchmark runs on prototypes (E = estimated)
Installed prototypes
- IBM BlueGene/P (FZJ), 01-2008
- IBM Power6 (SARA), 07-2008
- Cray XT5 (CSC), 11-2008
- IBM Cell/Power (BSC), 12-2008
- NEC SX9, vector part (HLRS), 02-2009
- Intel Nehalem/Xeon (CEA/FZJ), 06-2009
Status June 2009: summary of current prototype status
Web site and dissemination channels
- The PRACE web presence with news, events, RSS feeds etc.: http://www.prace-project.eu
- Alpha-Galileo service, reaching 6500 journalists around the globe: http://www.alphagalileo.org
- Belief Digital Library
- HPC magazines
- PRACE partner sites, top 10 HPC users
PRACE Dissemination Package
PRACE WP3 has created a dissemination package including templates, brochures, flyers, posters, badges, t-shirts and USB keys:
- The PRACE logo
- "Heavy Computing 10^15": the PRACE t-shirt
- The PRACE USB key
SC'08 Austin, 2008-11-19, Andreas Schott, DEISA
DEISA: May 1st, 2004 – April 30th, 2008
DEISA2: May 1st, 2008 – April 30th, 2011
DEISA Partners
- BSC: Barcelona Supercomputing Centre, Spain
- CINECA: Consorzio Interuniversitario per il Calcolo Automatico, Italy
- CSC: Finnish Information Technology Centre for Science, Finland
- EPCC: University of Edinburgh and CCLRC, UK
- ECMWF: European Centre for Medium-Range Weather Forecasts, UK (international)
- FZJ: Research Centre Juelich, Germany
- HLRS: High Performance Computing Centre Stuttgart, Germany
- IDRIS: Institut du Développement et des Ressources en Informatique Scientifique (CNRS), France
- LRZ: Leibniz Rechenzentrum Munich, Germany
- RZG: Rechenzentrum Garching of the Max Planck Society, Germany
- SARA: Dutch National High Performance Computing centre, Netherlands
- CEA-CCRT: Centre de Calcul Recherche et Technologie, CEA, France
- KTH: Kungliga Tekniska Högskolan, Sweden
- CSCS: Swiss National Supercomputing Centre, Switzerland
- JSCC: Joint Supercomputer Center of the Russian Academy of Sciences, Russia
DEISA 2008: Operating the European HPC Infrastructure
- >1 petaflop/s aggregated peak performance
- Most powerful European supercomputers for the most challenging projects
- Top-level Europe-wide application enabling
- Grand Challenge projects performed on a regular basis
DEISA Core Infrastructure and Services
- Dedicated high speed network
- Common AAA: single sign-on, accounting/budgeting
- Global data management: high-performance remote I/O and data sharing with global file systems; high-performance transfers of large data sets
- User operational infrastructure: Distributed Common Production Environment (DCPE), job management service, common user support and help desk
- System operational infrastructure: common monitoring and information systems, common system operation
- Global application support
DEISA dedicated high speed network
[Network diagram: national research networks (FUNET, SURFnet, DFN, UKERNA, GARR, RENATER, RedIris) interconnected via 1 Gb/s GRE tunnels and 10 Gb/s wavelength, routed and switched links.]
DEISA Global File System (based on MC-GPFS)
[Diagram: heterogeneous platforms connected through the global file system and GridFTP – IBM P5/P6 and BlueGene/P (AIX/Linux, LoadLeveler(-MC), Maui/Slurm), NEC SX8 (Super-UX, NQS II), Cray XT4/XT5 (UNICOS/lc, PBS Pro), SGI Altix (Linux, PBS Pro), IBM PPC (Linux).]
DEISA Software Layers (deployed across the DEISA sites)
- Presentation layer: multiple ways to access
- Job management and monitoring layer: common production environment, single monitor system, co-reservation and co-allocation, job rerouting, workflow management
- Data management layer: data transfer tools, data staging tools, WAN shared filesystem
- Network and AAA layers: network connectivity, unified AAA
Supercomputer hardware performance pyramid vs. application enabling requirements pyramid (EU / national / local)
- Capability computing will always need expert support for application enabling and optimization
- The more resource-demanding a single problem is, the higher, in general, the requirements for application enabling, including enhancing scalability
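The enabling-requirements point can be illustrated by contrasting strong scaling (fixed problem size, Amdahl's law) with weak scaling (problem size grown with the machine, Gustafson's law). The sketch below is hypothetical; the 0.1% serial fraction and the core counts are chosen only for illustration.

```python
# Strong vs. weak scaling efficiency (illustrative sketch).

def strong_scaling_efficiency(serial_fraction: float, p: int) -> float:
    """Parallel efficiency for a fixed-size problem (Amdahl's law)."""
    speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)
    return speedup / p

def weak_scaling_efficiency(serial_fraction: float, p: int) -> float:
    """Parallel efficiency when the problem grows with p (Gustafson's law)."""
    scaled_speedup = p - serial_fraction * (p - 1)
    return scaled_speedup / p

for p in (256, 4096, 65_536):
    print(f"{p:>6} cores: strong {strong_scaling_efficiency(0.001, p):5.1%}, "
          f"weak {weak_scaling_efficiency(0.001, p):5.1%}")
```

Strong-scaling efficiency collapses at capability scale while weak scaling stays near 100%, which is why the largest single-problem runs need the most expert enabling work.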
DEISA Organizational Structure
- WP1 Management
- WP2 Dissemination, External Relations, Training
- WP3 Operations
- WP4 Technologies
- WP5 Applications Enabling
- WP6 User Environment and Support
- WP7 Extreme Computing (DECI) and Benchmark Suite
- WP8 Integrated DEISA Development Environment
- WP9 Enhancing Scalability
Evolution of Supercomputing Resources
- DEISA partners' compute resources at DEISA project start (2004): ~30 TF aggregated peak performance
- DEISA partners' resources at DEISA2 project start (2008): over 1 PF aggregated peak performance on state-of-the-art supercomputers:
  - Cray XT4 and XT5, Linux
  - IBM Power5, Power6, AIX / Linux
  - IBM BlueGene/P, Linux (frontend)
  - IBM PowerPC, Linux (MareNostrum)
  - SGI Altix 4700 (Itanium2 Montecito), Linux
  - NEC SX8 vector system, Super-UX
- Systems interconnected with dedicated 10 Gb/s network links provided by GÉANT2 and NRENs
- Fixed fraction of resources dedicated to DEISA usage
DEISA Extreme Computing Initiative (DECI)
DECI was launched in early 2005 to enhance DEISA's impact on science and technology:
- Identification, enabling, deployment and operation of "flagship" applications in selected areas of science and technology
- Complex, demanding, innovative simulations requiring the exceptional capabilities of DEISA
- Multi-national proposals especially encouraged
- Proposals reviewed by national evaluation committees
- Projects chosen on the basis of innovation potential, scientific excellence, relevance criteria and national priorities
- Most powerful HPC architectures in Europe for the most challenging projects; the most appropriate supercomputer architecture selected for each project
- Mitigation of the rapid performance decay of a single national supercomputer within its short lifetime cycle of typically about 5 years, as implied by Moore's law
DEISA Extreme Computing Initiative
Involvement in projects from the DECI calls of 2005, 2006 and 2007: 157 research institutes and universities from 15 European countries (Austria, Finland, France, Germany, Hungary, Italy, Netherlands, Poland, Portugal, Romania, Russia, Spain, Sweden, Switzerland, UK), with collaborators from four other continents (North America, South America, Asia, Australia).
DEISA Extreme Computing Initiative
Calls for proposals for challenging supercomputing projects from all areas of science:
- DECI call 2005: 51 proposals, 12 European countries involved, co-investigator from the US; 30 million CPU-hours requested; 29 proposals accepted, 12 million CPU-hours awarded (normalized to IBM P4+)
- DECI call 2006: 41 proposals, 12 European countries involved, co-investigators from North and South America and Asia (US, CA, AR, Israel); 28 million CPU-hours requested; 23 proposals accepted, 12 million CPU-hours awarded (normalized to IBM P4+)
- DECI call 2007: 63 proposals, 14 European countries involved, co-investigators from North and South America, Asia and Australia (US, CA, BR, AR, Israel, AUS); 70 million CPU-hours requested; 45 proposals accepted, ~30 million CPU-hours awarded (normalized to IBM P4+)
- DECI call 2008: 66 proposals, 15 European countries involved, co-investigators from North and South America, Asia and Australia; 134 million CPU-hours requested (normalized to IBM P4+); evaluation in progress
DECI Project POLYRES: cover story of Nature, May 24, 2007
"Curvy membranes make proteins attractive"
For almost two decades, physicists have been on the track of membrane-mediated interactions. Simulations in DEISA have now revealed that curvy membranes make proteins attractive. Nature 447 (2007), 461-464.
[Figures: proteins (red) adhere on a membrane (blue/yellow) and locally bend it; this triggers a growing invagination. Cross-section through an almost complete vesicle.]
B. J. Reynwar et al.: Aggregation and vesiculation of membrane proteins by curvature mediated interactions, Nature Vol 447, 24 May 2007, doi:10.1038/nature05840
Achievements and Scientific Impact
Brochures can be downloaded from http://www.deisa.eu/publications/results
Evolution of User Categories in DEISA
[Timeline 2002–2011: DEISA Expression of Interest; start of FP6 DEISA with early adopters (Joint Research Activities); single-project support through the DEISA Extreme Computing Initiative; support of virtual communities and EU projects; start of FP7 DEISA2; preparatory phase.]
Tier-0 / Tier-1 Centers: are there implications for the services?
Main difference between T0 and T1 centers: policy and usage models.
- T1 centers can evolve to T0 for strategic/political reasons; T0 machines automatically degrade to T1 level by aging
- T0 centers: leadership-class European systems in competition with the leading systems worldwide, cyclically renewed; governance structure to be provided by a European organization (PRACE)
- T1 centers: leading national centers, cyclically renewed, optionally surpassing the performance of older T0 machines; national governance structure
- Services have to be the same in T0/T1: because systems change status over time, and for user transparency across the different systems (only visible difference: some services could have different flavors for T0 and T1)
Summary
- Evolution of this European infrastructure towards a robust and persistent European HPC ecosystem
- Enhancing the existing services, deploying new services including support for European virtual communities, and cooperating and collaborating with new European initiatives, especially PRACE
- DEISA2 as the vector for the integration of Tier-0 and Tier-1 systems in Europe, providing a lean and reliable turnkey operational solution for a persistent European HPC infrastructure
- Bridging worldwide HPC projects: facilitating the support of international science communities with computational needs traversing existing political boundaries
EGEE Status April 2009
Infrastructure:
- Number of countries connected to the EGEE infrastructure: 54
- Number of CPUs (cores) available to users 24/7: ~139,000
- Storage capacity available: ~25 PB disk + 38 PB tape MSS
Users:
- Number of registered Virtual Organisations: >112
- Number of registered users: >13,000
- Number of people benefiting from the existence of the EGEE infrastructure: ~20,000
- Number of jobs: >390k jobs/day
- Number of application domains making use of the EGEE infrastructure: more than 15

 
Summer school bz_fp7research_20100708
Summer school bz_fp7research_20100708Summer school bz_fp7research_20100708
Summer school bz_fp7research_20100708Sandro D'Elia
 
BioDT for the UiO Science section meeting 2023-03-24
BioDT for the UiO Science section meeting 2023-03-24BioDT for the UiO Science section meeting 2023-03-24
BioDT for the UiO Science section meeting 2023-03-24Dag Endresen
 
FISTERA - a personal view
FISTERA - a personal viewFISTERA - a personal view
FISTERA - a personal viewIan Miles
 
META-NET and META-SHARE: An Overview
META-NET and META-SHARE: An OverviewMETA-NET and META-SHARE: An Overview
META-NET and META-SHARE: An OverviewGeorg Rehm
 
El nuevo superordenador Mare Nostrum y el futuro procesador europeo
El nuevo superordenador Mare Nostrum y el futuro procesador europeoEl nuevo superordenador Mare Nostrum y el futuro procesador europeo
El nuevo superordenador Mare Nostrum y el futuro procesador europeoAMETIC
 
The Legacy and the Future of Research Networks in Technology-Enhanced Learning
The Legacy and the Future of Research Networks in Technology-Enhanced LearningThe Legacy and the Future of Research Networks in Technology-Enhanced Learning
The Legacy and the Future of Research Networks in Technology-Enhanced LearningRalf Klamma
 
European Open Science Cloud: History and Status
European Open Science Cloud: History and StatusEuropean Open Science Cloud: History and Status
European Open Science Cloud: History and StatusMatthew Dovey
 
EOSC pilot STFC
EOSC pilot STFCEOSC pilot STFC
EOSC pilot STFCJisc RDM
 
The Developing Needs for e-infrastructures
The Developing Needs for e-infrastructuresThe Developing Needs for e-infrastructures
The Developing Needs for e-infrastructuresguest0dc425
 
European Policies for High Performance Computing
European Policies for High Performance ComputingEuropean Policies for High Performance Computing
European Policies for High Performance ComputingCarl-Christian Buhr
 
PHIDIAS HPC – Building a prototype for Earth Science Data and HPC Services
PHIDIAS HPC – Building a prototype for Earth Science Data and HPC ServicesPHIDIAS HPC – Building a prototype for Earth Science Data and HPC Services
PHIDIAS HPC – Building a prototype for Earth Science Data and HPC ServicesPhidias
 
The European Open Science Cloud: just what is it?
The European Open Science Cloud: just what is it?The European Open Science Cloud: just what is it?
The European Open Science Cloud: just what is it?Jisc
 
T21.Fujitsu World Tour India 2016-Education, Research and Design
T21.Fujitsu World Tour India 2016-Education, Research and DesignT21.Fujitsu World Tour India 2016-Education, Research and Design
T21.Fujitsu World Tour India 2016-Education, Research and DesignFujitsu India
 
Software Sustainability Institute
Software Sustainability InstituteSoftware Sustainability Institute
Software Sustainability InstituteNeil Chue Hong
 

Similar to Session 50 - High Performance Computing Ecosystem in Europe (20)

OpenAIRE presentation at EuroCRIS Seminar "Evaluation of Research using a CRIS"
OpenAIRE presentation at EuroCRIS Seminar "Evaluation of Research using a CRIS"OpenAIRE presentation at EuroCRIS Seminar "Evaluation of Research using a CRIS"
OpenAIRE presentation at EuroCRIS Seminar "Evaluation of Research using a CRIS"
 
Sshoc kick off meeting - 1.2.3 EOSC board - Social Sciences and Humanities Op...
Sshoc kick off meeting - 1.2.3 EOSC board - Social Sciences and Humanities Op...Sshoc kick off meeting - 1.2.3 EOSC board - Social Sciences and Humanities Op...
Sshoc kick off meeting - 1.2.3 EOSC board - Social Sciences and Humanities Op...
 
Science in a Digital Age
Science in a Digital AgeScience in a Digital Age
Science in a Digital Age
 
Per Blixt - Fire results from call 5 and plans for call 7
Per Blixt - Fire results from call 5 and plans for call 7Per Blixt - Fire results from call 5 and plans for call 7
Per Blixt - Fire results from call 5 and plans for call 7
 
European Open Science Cloud: Concept, status and opportunities
European Open Science Cloud: Concept, status and opportunitiesEuropean Open Science Cloud: Concept, status and opportunities
European Open Science Cloud: Concept, status and opportunities
 
Summer school bz_fp7research_20100708
Summer school bz_fp7research_20100708Summer school bz_fp7research_20100708
Summer school bz_fp7research_20100708
 
BioDT for the UiO Science section meeting 2023-03-24
BioDT for the UiO Science section meeting 2023-03-24BioDT for the UiO Science section meeting 2023-03-24
BioDT for the UiO Science section meeting 2023-03-24
 
FISTERA - a personal view
FISTERA - a personal viewFISTERA - a personal view
FISTERA - a personal view
 
META-NET and META-SHARE: An Overview
META-NET and META-SHARE: An OverviewMETA-NET and META-SHARE: An Overview
META-NET and META-SHARE: An Overview
 
El nuevo superordenador Mare Nostrum y el futuro procesador europeo
El nuevo superordenador Mare Nostrum y el futuro procesador europeoEl nuevo superordenador Mare Nostrum y el futuro procesador europeo
El nuevo superordenador Mare Nostrum y el futuro procesador europeo
 
The Legacy and the Future of Research Networks in Technology-Enhanced Learning
The Legacy and the Future of Research Networks in Technology-Enhanced LearningThe Legacy and the Future of Research Networks in Technology-Enhanced Learning
The Legacy and the Future of Research Networks in Technology-Enhanced Learning
 
European Open Science Cloud: History and Status
European Open Science Cloud: History and StatusEuropean Open Science Cloud: History and Status
European Open Science Cloud: History and Status
 
EOSC pilot STFC
EOSC pilot STFCEOSC pilot STFC
EOSC pilot STFC
 
The Developing Needs for e-infrastructures
The Developing Needs for e-infrastructuresThe Developing Needs for e-infrastructures
The Developing Needs for e-infrastructures
 
European Policies for High Performance Computing
European Policies for High Performance ComputingEuropean Policies for High Performance Computing
European Policies for High Performance Computing
 
PHIDIAS HPC – Building a prototype for Earth Science Data and HPC Services
PHIDIAS HPC – Building a prototype for Earth Science Data and HPC ServicesPHIDIAS HPC – Building a prototype for Earth Science Data and HPC Services
PHIDIAS HPC – Building a prototype for Earth Science Data and HPC Services
 
The European Open Science Cloud: just what is it?
The European Open Science Cloud: just what is it?The European Open Science Cloud: just what is it?
The European Open Science Cloud: just what is it?
 
T21.Fujitsu World Tour India 2016-Education, Research and Design
T21.Fujitsu World Tour India 2016-Education, Research and DesignT21.Fujitsu World Tour India 2016-Education, Research and Design
T21.Fujitsu World Tour India 2016-Education, Research and Design
 
Conversatorio: estado de las National Research and Education Networks (NREN)...
 Conversatorio: estado de las National Research and Education Networks (NREN)... Conversatorio: estado de las National Research and Education Networks (NREN)...
Conversatorio: estado de las National Research and Education Networks (NREN)...
 
Software Sustainability Institute
Software Sustainability InstituteSoftware Sustainability Institute
Software Sustainability Institute
 

More from ISSGC Summer School

Session 49 - Semantic metadata management practical
Session 49 - Semantic metadata management practical Session 49 - Semantic metadata management practical
Session 49 - Semantic metadata management practical ISSGC Summer School
 
Session 46 - Principles of workflow management and execution
Session 46 - Principles of workflow management and execution Session 46 - Principles of workflow management and execution
Session 46 - Principles of workflow management and execution ISSGC Summer School
 
Session 37 - Intro to Workflows, API's and semantics
Session 37 - Intro to Workflows, API's and semantics Session 37 - Intro to Workflows, API's and semantics
Session 37 - Intro to Workflows, API's and semantics ISSGC Summer School
 
Session 43 :: Accessing data using a common interface: OGSA-DAI as an example
Session 43 :: Accessing data using a common interface: OGSA-DAI as an exampleSession 43 :: Accessing data using a common interface: OGSA-DAI as an example
Session 43 :: Accessing data using a common interface: OGSA-DAI as an exampleISSGC Summer School
 
Session 24 - Distribute Data and Metadata Management with gLite
Session 24 - Distribute Data and Metadata Management with gLiteSession 24 - Distribute Data and Metadata Management with gLite
Session 24 - Distribute Data and Metadata Management with gLiteISSGC Summer School
 
General Introduction to technologies that will be seen in the school
General Introduction to technologies that will be seen in the school General Introduction to technologies that will be seen in the school
General Introduction to technologies that will be seen in the school ISSGC Summer School
 
Session 3-Distributed System Principals
Session 3-Distributed System PrincipalsSession 3-Distributed System Principals
Session 3-Distributed System PrincipalsISSGC Summer School
 

More from ISSGC Summer School (18)

Session 49 - Semantic metadata management practical
Session 49 - Semantic metadata management practical Session 49 - Semantic metadata management practical
Session 49 - Semantic metadata management practical
 
Session 46 - Principles of workflow management and execution
Session 46 - Principles of workflow management and execution Session 46 - Principles of workflow management and execution
Session 46 - Principles of workflow management and execution
 
Session 42 - GridSAM
Session 42 - GridSAMSession 42 - GridSAM
Session 42 - GridSAM
 
Session 37 - Intro to Workflows, API's and semantics
Session 37 - Intro to Workflows, API's and semantics Session 37 - Intro to Workflows, API's and semantics
Session 37 - Intro to Workflows, API's and semantics
 
Session 43 :: Accessing data using a common interface: OGSA-DAI as an example
Session 43 :: Accessing data using a common interface: OGSA-DAI as an exampleSession 43 :: Accessing data using a common interface: OGSA-DAI as an example
Session 43 :: Accessing data using a common interface: OGSA-DAI as an example
 
Session 36 - Engage Results
Session 36 - Engage ResultsSession 36 - Engage Results
Session 36 - Engage Results
 
Social Program
Social ProgramSocial Program
Social Program
 
Session29 Arc
Session29 ArcSession29 Arc
Session29 Arc
 
Session 24 - Distribute Data and Metadata Management with gLite
Session 24 - Distribute Data and Metadata Management with gLiteSession 24 - Distribute Data and Metadata Management with gLite
Session 24 - Distribute Data and Metadata Management with gLite
 
Session 23 - gLite Overview
Session 23 - gLite OverviewSession 23 - gLite Overview
Session 23 - gLite Overview
 
General Introduction to technologies that will be seen in the school
General Introduction to technologies that will be seen in the school General Introduction to technologies that will be seen in the school
General Introduction to technologies that will be seen in the school
 
Session 3-Distributed System Principals
Session 3-Distributed System PrincipalsSession 3-Distributed System Principals
Session 3-Distributed System Principals
 
Session18 Madduri
Session18  MadduriSession18  Madduri
Session18 Madduri
 
Session6 Security Emidio
Session6 Security  EmidioSession6 Security  Emidio
Session6 Security Emidio
 
Session9part1
Session9part1Session9part1
Session9part1
 
Session19 Globus
Session19 GlobusSession19 Globus
Session19 Globus
 
Session11 Ucc Intro
Session11 Ucc IntroSession11 Ucc Intro
Session11 Ucc Intro
 
Session9part2 Servers Detailed
Session9part2  Servers DetailedSession9part2  Servers Detailed
Session9part2 Servers Detailed
 

Recently uploaded

Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfHyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfPrecisely
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 

Recently uploaded (20)

Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfHyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 

Session 50 - High Performance Computing Ecosystem in Europe

  • 1. High-Performance Computing Ecosystem in Europe July 15th, 2009 Kimmo Koski CSC – The Finnish IT Center for Science
  • 2. Topics Terminology and definitions Emerging trends Stakeholders On-going Grid and HPC activities Concluding remarks
  • 3. Terminology and pointers HPC: High Performance Computing. HET, http://www.hpcineuropetaskforce.eu/: High Performance Computing in Europe Taskforce, established in June 2006 with a mandate to draft a strategy for the European HPC ecosystem. Petaflop/s: performance figure, 10^15 floating point operations (calculations) per second. e-IRG, http://www.eirg.eu: e-Infrastructure Reflection Group. e-IRG supports the creation of a framework (political, technological and administrative) for the easy and cost-effective shared use of distributed electronic resources across Europe, particularly for grid computing, storage and networking. ESFRI, http://cordis.europa.eu/esfri/: European Strategy Forum on Research Infrastructures. The role of ESFRI is to support a coherent approach to policy-making on research infrastructures in Europe, and to act as an incubator for international negotiations about concrete initiatives. In particular, ESFRI is preparing a European Roadmap for new research infrastructures of pan-European interest. RI: Research Infrastructure.
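The petaflop/s figure quoted for a machine is usually its theoretical peak: the product of node count, sockets, cores, clock rate and FLOPs issued per cycle. A minimal sketch of that arithmetic, using purely illustrative numbers (not any specific PRACE system):

```python
def theoretical_peak_flops(nodes, sockets_per_node, cores_per_socket,
                           clock_hz, flops_per_cycle):
    """Peak = nodes x sockets x cores x clock x FLOPs per cycle."""
    return nodes * sockets_per_node * cores_per_socket * clock_hz * flops_per_cycle

# Hypothetical 10,000-node cluster, dual-socket quad-core, 3 GHz,
# 4 FLOPs/cycle (e.g. 2-wide SIMD with fused multiply-add).
peak = theoretical_peak_flops(nodes=10_000, sockets_per_node=2,
                              cores_per_socket=4, clock_hz=3.0e9,
                              flops_per_cycle=4)
print(f"{peak / 1e15:.2f} Pflop/s")  # 0.96 Pflop/s
```

Sustained application performance is typically only a fraction of this theoretical value, which is exactly the distinction the taskforce terminology has to keep in mind.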
  • 4. Terminology and pointers (cont.) PRACE, http://www.prace-project.eu/ Partnership for Advanced Computing in Europe EU FP7 project for preparatory phase in building the European petaflop computing centers, based on HET work DEISA-2, https://www.deisa.org/ Distributed European Infrastructure for Supercomputing Applications. DEISA is a consortium of leading national supercomputing centers that currently deploys and operates a persistent, production quality, distributed supercomputing environment with continental scope. EGEE-III, http://www.eu-egee.org/ Enabling Grid for E-sciencE. The project provides researchers in academia and industry with access to a production level Grid infrastructure, independent of their geographic location. EGI_DS, http://www.eu-egi.org/ An effort to establish a sustainable grid infrastructure in Europe GÉANT2, http://www.geant2.net/ Seventh generation of pan-European research and education network
  • 6. Performance Pyramid: Tier 0, European HPC center(s) (PRACE, capability computing); Tier 1, national/regional centers and Grid collaboration (DEISA-2, EGEE-III); Tier 2, local centers (capacity computing); e-IRG spans the layers.
  • 7. Need to remember about petaflop/s… What do you mean by petaflop/s? (1) Theoretical petaflop/s? (2) LINPACK petaflop/s? (3) Sustained petaflop/s for a single extremely parallel application? (4) Sustained petaflop/s for multiple parallel applications? Note that between 1 and 4 there might be several years. Petaflop/s hardware needs petaflop/s applications, which are not easy to program, or not even possible in many cases. Do we even know how to scale over 100,000 processors…
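The scaling worry in the last bullet can be made concrete with Amdahl's law, a simple and deliberately optimistic model (it ignores communication, load imbalance and I/O, which only make things worse):

```python
def amdahl_speedup(serial_fraction, processors):
    """Amdahl's law: speedup is capped by the serial fraction of the code."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even a 0.1% serial fraction throttles a 100,000-processor run.
for p in (1_000, 10_000, 100_000):
    print(f"{p:>7} processors -> speedup {amdahl_speedup(0.001, p):.1f}")
```

With a 0.1% serial fraction the speedup saturates near 1,000, so at 100,000 processors the parallel efficiency falls to roughly 1 percent; this is why petaflop/s hardware needs petaflop/s applications.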
  • 10. Global Virtual Research Communities: each research community combines human interaction, workspaces, labs, scientific data, Grid computing and networks.
  • 11. Global Virtual Research Communities: virtual communities sharing virtual labs, scientific data, Grid and network resources yield economies of scale and efficiency gains.
  • 12. Data and information explosion Petascale computing produces exascale data
  • 13. HPC is a part of a larger ecosystem: disciplines and user communities; competence; software development; data infrastructures and services; HPC and Grid infrastructures.
  • 14. HPC Ecosystem to support the top: the upper layers of the pyramid (HPC centers/services; European projects for HPC/Grid, networking, …) and the activities which enable efficient usage of the upper layers (inclusion of national HPC infrastructures; software development and scalability issues; competence development; interoperability between the layers).
  • 16. Stakeholder categories in PRACE: providers of HPC services; European HPC and grid projects; networking infrastructure providers; hardware vendors; software vendors and the software-developing academic community; end users and their access through related Research Infrastructures; funding bodies on a national and international level; policy-setting organisations directly involved in developing the research infrastructure, and political bodies like parliaments responsible for national and international legislation.
  • 17. Policy and strategy work: HET, HPC in Europe Taskforce, http://www.hpcineuropetaskforce.eu/; e-IRG, e-Infrastructure Reflection Group, http://www.e-irg.org/; ESFRI, European Strategy Forum on Research Infrastructures, http://www.cordis.lu/esfri/; ERA Expert Group on Research Infrastructures.
  • 18. Some focus areas: collaboration between research and e-infrastructure providers; horizontal ICT services; a balanced approach with more focus on data, software development and competence development; inclusion of different countries with different contribution levels; new emerging technologies and innovative computing initiatives; global collaboration, for example the Exascale computing initiative; policy work, resource exchange, sustainable services etc.
  • 19. On-going Grid and HPC activities
  • 20. EU infrastructure projects: GÉANT and a number of data infrastructure projects.
  • 22. Supercomputing Drives Science through Simulation: Environment (weather/climatology, pollution/ozone hole); Ageing society (medicine, biology); Energy (plasma physics, fuel cells); Materials/information technology (spintronics, nanoscience).
  • 23. PRACE Initiative, history and first steps (HPCEUR and HET, 2004–2008): production of the HPC part of the ESFRI Roadmap; creation of a vision involving 15 European countries; bringing scientists together and creation of the Scientific Case; signature of the MoU; submission of an FP7 project proposal; approval of the project; project start.
  • 24. HET: The Scientific Case. Weather, climatology, earth science: degree of warming and scenarios for our future climate; understanding and predicting ocean properties and variations; weather and flood events. Astrophysics, elementary particle physics, plasma physics: systems and structures spanning a large range of length and time scales; quantum field theories like QCD; ITER. Materials science, chemistry, nanoscience: understanding complex materials, complex chemistry and nanoscience; determination of electronic and transport properties. Life science: systems biology, chromatin dynamics, large-scale protein dynamics, protein association and aggregation, supramolecular systems, medicine. Engineering: complex helicopter simulation, biomedical flows, gas turbines and internal combustion engines, forest fires, green aircraft, virtual power plant.
  • 26. attractiveness for researchers
  • 28. Third success: the PRACE project. Partnership for Advanced Computing in Europe (PRACE), an EU project of the European Commission 7th Framework Programme for the construction of new infrastructures, preparatory phase (FP7-INFRASTRUCTURES-2007-1). Partners are 16 legal entities from 14 European countries. Budget: €20 million, of which €10 million EU funding. Duration: January 2008 – December 2009. Grant no: RI-211528.
  • 30. PRACE Work Packages: WP1 Management; WP2 Organizational concept; WP3 Dissemination, outreach and training; WP4 Distributed computing; WP5 Deployment of prototype systems; WP6 Software enabling for prototype systems; WP7 Petaflop systems for 2009/2010; WP8 Future petaflop technologies.
  • 31. PRACE Objectives in a Nutshell: provide world-class systems for world-class science; create a single European entity; deploy 3–5 systems of the highest performance level (tier-0); ensure diversity of architectures; provide support and training. PRACE will be created to stay.
  • 32. Representative Benchmark Suite: defined a set of application benchmarks to be used in the procurement process for petaflop/s systems; 12 core applications plus 8 additional applications. Core: NAMD, VASP, QCD, CPMD, GADGET, Code_Saturne, TORB, ECHAM5, NEMO, CP2K, GROMACS, N3D. Additional: AVBP, HELIUM, TRIPOLI_4, PEPC, GPAW, ALYA, SIESTA, BSIT. Each application will be ported to an appropriate subset of prototypes. Synthetic benchmarks for architecture evaluation: computation, mixed-mode, I/O, bandwidth, OS, communication. Applications and synthetic benchmarks are integrated into JuBE, the Juelich Benchmark Environment.
  • 34. Based on the application analysis - expressed in a condensed, qualitative way
  • 35. Need for different “general purpose” systems
  • 36. There are promising emerging architectures
  • 37. Will be more quantitative after benchmark runs on prototypes (E = estimated).
  • 38. Installed prototypes: IBM BlueGene/P (FZJ) 01-2008; IBM Power6 (SARA) 07-2008; Cray XT5 (CSC) 11-2008; IBM Cell/Power (BSC) 12-2008; NEC SX9, vector part (HLRS) 02-2009; Intel Nehalem/Xeon (CEA/FZJ) 06-2009.
  • 39. Status June 2009: summary of current prototype status.
  • 40. Web site and dissemination channels: the PRACE web presence with news, events, RSS feeds etc., http://www.prace-project.eu; the AlphaGalileo service, reaching 6500 journalists around the globe, http://www.alphagalileo.org; the Belief Digital Library; HPC magazines; PRACE partner sites; top 10 HPC users.
  • 41. PRACE Dissemination Package: PRACE WP3 has created a dissemination package including templates, brochures, flyers, posters, badges, t-shirts and USB keys: the PRACE logo, the "Heavy Computing 10^15" PRACE t-shirt, the PRACE USB key.
  • 43. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 38 DEISA: May 1st, 2004 – April 30th, 2008 DEISA2: May 1st, 2008 – April 30th, 2011 DEISA Partners
  • 44. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 39 DEISA Partners
BSC – Barcelona Supercomputing Centre, Spain
CINECA – Consorzio Interuniversitario per il Calcolo Automatico, Italy
CSC – Finnish Information Technology Centre for Science, Finland
EPCC – University of Edinburgh and CCLRC, UK
ECMWF – European Centre for Medium-Range Weather Forecasts, UK (international)
FZJ – Research Centre Juelich, Germany
HLRS – High Performance Computing Centre Stuttgart, Germany
IDRIS – Institut du Développement et des Ressources en Informatique Scientifique (CNRS), France
LRZ – Leibniz Rechenzentrum Munich, Germany
RZG – Rechenzentrum Garching of the Max Planck Society, Germany
SARA – Dutch National High Performance Computing, Netherlands
CEA-CCRT – Centre de Calcul Recherche et Technologie (CEA), France
KTH – Kungliga Tekniska Högskolan, Sweden
CSCS – Swiss National Supercomputing Centre, Switzerland
JSCC – Joint Supercomputer Center of the Russian Academy of Sciences, Russia
  • 45. DEISA 2008 Operating the European HPC Infrastructure >1 PetaFlop/s Aggregated peak performance Most powerful European supercomputers for most challenging projects Top-level Europe-wide application enabling Grand Challenge projects performed on a regular basis
  • 46. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 41 DEISA Core Infrastructure and Services Dedicated High Speed Network Common AAA Single sign on Accounting/budgeting Global Data Management High performance remote I/O and data sharing with global file systems High performance transfers of large data sets User Operational Infrastructure Distributed Common Production Environment (DCPE) Job management service Common user support and help desk System Operational Infrastructure Common monitoring and information systems Common system operation Global Application Support
  • 47. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 42 DEISA dedicated high-speed network (diagram): NRENs FUNET, SURFnet, DFN, UKERNA, GARR, RENATER, RedIris; link types: 1 Gb/s GRE tunnel, 10 Gb/s wavelength, 10 Gb/s routed, 10 Gb/s switched
  • 48. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 43 DEISA Global File System (based on MC-GPFS) – diagram of site systems (IBM P5/P6 and BlueGene/P, IBM PPC, Cray XT4/XT5, NEC SX8, SGI Altix) with their operating systems and batch schedulers (AIX with LL-MC, Super-UX with NQS II, UNICOS/lc with PBS Pro, Linux with Maui/Slurm, LL or PBS Pro), interconnected via GridFTP
  • 49. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 44 DEISA Software Layers (diagram): Presentation layer – multiple ways to access; Job management and monitoring layer – common production environment, single monitor system, co-reservation and co-allocation, job rerouting, workflow management; Data management layer – data transfer tools, data staging tools, WAN shared file system; Network and AAA layers – unified AAA, network connectivity; spanning the DEISA sites
  • 50. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 45 Supercomputer Hardware Performance Pyramid / Supercomputer Application Enabling Requirements Pyramid (EU – National – Local) Capability computing will always need expert support for application enabling and optimization The more resource-demanding a single problem is, the higher the requirements for application enabling, including enhancing scalability
  • 51. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 46 DEISA Organizational Structure WP1 – Management WP2 – Dissemination, External Relations, Training WP3 – Operations WP4 – Technologies WP5 – Applications Enabling WP6 – User Environment and Support WP7 – Extreme Computing (DECI) and Benchmark Suite WP8 – Integrated DEISA Development Environment WP9 – Enhancing Scalability
  • 52. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 47 Evolution of Supercomputing Resources DEISA partners' compute resources at DEISA project start (2004): ~30 TF aggregated peak performance DEISA partners' resources at DEISA2 project start (2008): over 1 PF aggregated peak performance on state-of-the-art supercomputers Cray XT4 and XT5, Linux IBM Power5, Power6, AIX / Linux IBM BlueGene/P, Linux (frontend) IBM PowerPC, Linux (MareNostrum) SGI Altix 4700 (Itanium2 Montecito), Linux NEC SX8 vector system, Super-UX Systems interconnected with dedicated 10 Gb/s network links provided by GÉANT2 and NRENs Fixed fraction of resources dedicated to DEISA usage
  • 53. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 48 DEISA Extreme Computing Initiative (DECI) DECI launched in early 2005 to enhance DEISA's impact on science and technology Identification, enabling, deployment and operation of “flagship” applications in selected areas of science and technology Complex, demanding, innovative simulations requiring the exceptional capabilities of DEISA Multi-national proposals especially encouraged Proposals reviewed by national evaluation committees Projects chosen on the basis of innovation potential, scientific excellence, relevance criteria, and national priorities Most powerful HPC architectures in Europe for the most challenging projects Most appropriate supercomputer architecture selected for each project Mitigation of the rapid performance decay of a single national supercomputer within its short lifetime cycle of typically about 5 years, as implied by Moore's law
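The Moore's-law argument in the last bullet can be made concrete with a back-of-the-envelope calculation. The ~18-month doubling period used here is the common rule of thumb, not a figure from the slide:

```python
# Sketch of the "performance decay" argument: a fixed machine loses ground
# relative to Moore's-law growth. Assumes performance doubles roughly every
# 18 months (a rule-of-thumb assumption, not a DEISA figure).
def relative_standing(years, doubling_period_years=1.5):
    """Fraction of state-of-the-art capability a fixed system retains."""
    return 0.5 ** (years / doubling_period_years)

# Over a typical ~5-year lifetime the machine falls to roughly a tenth of
# the contemporary state of the art, motivating a rotating pool of systems.
print(f"{relative_standing(5):.2%}")
```

This is why DECI matches each project to the currently most suitable machine in the pool rather than binding users to one national system.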
  • 54. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 49 DEISA Extreme Computing Initiative Involvements in projects from DECI calls 2005, 2006, 2007: 157 research institutes and universities from 15 European countries Austria Finland France Germany Hungary Italy Netherlands Poland Portugal Romania Russia Spain Sweden Switzerland UK with collaborators from four other continents North America, South America, Asia, Australia
  • 55. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 50 DEISA Extreme Computing Initiative Calls for Proposals for challenging supercomputing projects from all areas of science DECI call 2005: 51 proposals, 12 European countries involved, co-investigator from the US; 30 mio cpu-h requested; 29 proposals accepted, 12 mio cpu-h awarded (normalized to IBM P4+) DECI call 2006: 41 proposals, 12 European countries involved, co-investigators from North and South America and Asia (US, CA, AR, IL); 28 mio cpu-h requested; 23 proposals accepted, 12 mio cpu-h awarded (normalized to IBM P4+) DECI call 2007: 63 proposals, 14 European countries involved, co-investigators from North and South America, Asia and Australia (US, CA, BR, AR, IL, AU); 70 mio cpu-h requested; 45 proposals accepted, ~30 mio cpu-h awarded (normalized to IBM P4+) DECI call 2008: 66 proposals, 15 European countries involved, co-investigators from North and South America, Asia and Australia; 134 mio cpu-h requested (normalized to IBM P4+); evaluation in progress
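From the per-call figures above, acceptance and award rates follow directly (a quick Python tally; the 2008 call is omitted because its evaluation was still in progress):

```python
# Acceptance and award rates computed from the DECI call figures on this
# slide (cpu-h normalized to IBM P4+).
calls = {
    # year: (proposals, accepted, mio cpu-h requested, mio cpu-h awarded)
    2005: (51, 29, 30, 12),
    2006: (41, 23, 28, 12),
    2007: (63, 45, 70, 30),
}

for year, (prop, acc, req, awd) in calls.items():
    print(f"{year}: {acc / prop:.0%} of proposals accepted, "
          f"{awd / req:.0%} of requested cpu-h awarded")
```

Roughly 40% of the requested time was awarded in each call, even as the accepted-proposal rate varied, suggesting the time budget, not proposal quality, was the binding constraint.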
  • 56. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 51 DECI Project POLYRES Cover Story of Nature, May 24, 2007: Curvy membranes make proteins attractive For almost two decades, physicists have been on the trail of membrane-mediated interactions. Simulations in DEISA have now revealed that curvy membranes make proteins attractive. Nature 447 (2007), 461-464 Proteins (red) adhere to a membrane (blue/yellow) and locally bend it; this triggers a growing invagination. Cross-section through an almost complete vesicle. B. J. Reynwar et al.: Aggregation and vesiculation of membrane proteins by curvature-mediated interactions, Nature 447, 461-464 (24 May 2007), doi:10.1038/nature05840
  • 57. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 52 Achievements and Scientific Impact Brochures can be downloaded from http://www.deisa.eu/publications/results
  • 58. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 53 Evolution of User Categories in DEISA (timeline 2002-2011): DEISA EoI → Early adopters (Joint Research Activities) → DEISA Extreme Computing Initiative → Single project support → Support of Virtual Communities and EU projects; milestones: start of FP6 DEISA (2004), DEISA2 preparatory phase, start of FP7 DEISA2 (2008)
  • 59. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 54 Tier-0 / Tier-1 Centers: Are there implications for the services? The main difference between T0 and T1 centers lies in policy and usage models T1 centers can evolve to T0 for strategic/political reasons T0 machines automatically degrade to T1 level by aging T0 Centers: Leadership-class European systems in competition with the leading systems worldwide, cyclically renewed; governance structure to be provided by a European organization (PRACE) T1 Centers: Leading national centers, cyclically renewed, optionally surpassing the performance of older T0 machines; national governance structure Services have to be the same in T0/T1, both because systems change status over time and for user transparency across the different systems (some services could have different flavors for T0 and T1)
  • 60. SC'08 Austin 2008-11-19 Andreas Schott, DEISA 55 Summary Evolution of this European infrastructure towards a robust and persistent European HPC ecosystem Enhancing the existing services, deploying new services including support for European Virtual Communities, and cooperating and collaborating with new European initiatives, especially PRACE DEISA2 as the vector for the integration of Tier-0 and Tier-1 systems in Europe To provide a lean and reliable turnkey operational solution for a persistent European HPC infrastructure Bridging worldwide HPC projects: to facilitate the support of international science communities with computational needs traversing existing political boundaries
  • 62. Number of countries connected to the EGEE infrastructure: 54
  • 63. Number of CPUs (cores) available to users 24/7: ~139,000
  • 65. Number of registered Virtual Organisations: >112
  • 66. Number of registered users: > 13000
  • 67. Number of people benefiting from the existence of the EGEE infrastructure: ~20000
  • 68. Number of jobs: >390k jobs/day
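The job and core counts above imply some rough aggregate numbers. This sketch assumes steady job arrival and full utilization, which real Grid workloads will not match exactly:

```python
# Rough aggregate throughput from the EGEE figures on these slides:
# >390k jobs/day across ~139,000 cores. The per-job figure below is only
# an illustrative average under an assumed fully-loaded infrastructure.
jobs_per_day = 390_000
cores = 139_000

jobs_per_second = jobs_per_day / 86_400
core_hours_per_job_if_saturated = cores * 24 / jobs_per_day

print(f"{jobs_per_second:.1f} jobs/s")
print(f"{core_hours_per_job_if_saturated:.1f} core-hours/job at full load")
```

Even as an upper bound, this shows EGEE jobs were typically short, high-throughput tasks, a very different profile from the capability runs targeted by DEISA and PRACE.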
  • 70. www.eu-egi.org 58 38 National Grid Initiatives
  • 71. www.eu-egi.org 59 EGI Objectives (1/3) Ensure the long-term sustainability of the European infrastructure Coordinate the integration and interaction between National Grid Infrastructures Operate the European level of the production Grid infrastructure for a wide range of scientific disciplines to link National Grid Infrastructures Provide global services and support that complement and/or coordinate national services Collaborate closely with industry as technology and service providers, as well as Grid users, to promote the rapid and successful uptake of Grid technology by European industry
  • 72. www.eu-egi.org 60 EGI Objectives (2/3) Coordinate middleware development and standardization to enhance the infrastructure by soliciting targeted developments from leading EU and National Grid middleware development projects Advise National and European Funding Agencies in establishing their programmes for future software developments based on agreed user needs and development standards Integrate, test, validate and package software from leading Grid middleware development projects and make it widely available
  • 73. www.eu-egi.org 61 EGI Objectives (3/3) Provide documentation and training material for the middleware and operations. Take into account developments made by national e-science projects which were aimed at supporting diverse communities Link the European infrastructure with similar infrastructures elsewhere Promote Grid interface standards based on practical experience gained from Grid operations and middleware integration activities, in consultation with relevant standards organizations EGI Vision Paper http://www.eu-egi.org/vision.pdf
  • 74. Integration and interoperability PRACE and EGI are targeting a sustainable infrastructure; DEISA-2 and EGEE-III are project-based Sometimes national stakeholders are partners in multiple initiatives Users do not necessarily care where they get the service as long as they get it Integration of PRACE and DEISA, and the transition from EGEE to EGI, are possible; going further requires creative thinking
  • 75. New HPC Ecosystem is being built…
  • 76. New market for European HPC 44 new research infrastructure projects on the ESFRI list, 34 running a preparatory phase project 1-4 years, 1-7 MEUR * 2 (petaflop computing 10 MEUR * 2) Successful new research infrastructures start construction 2009-2011 10-1000 MEUR per infrastructure First ones starting to deploy: ESS in Lund etc. Existing research infrastructures are also developing: CERN, EMBL, ESA, ESO, ECMWF, ITER, … Results: growing RI market, considerably rising funding volume Need for horizontal activities (computing, data, networks, computational methods and scalability, application development, …) Real danger of building disciplinary silos instead of seeking IT synergy Several BEUR for ICT
  • 77. Some Key Issues in building the ecosystem Sustainability EGEE and DEISA are projects with an end PRACE and EGI are targeted to be sustainable with no definitive end ESFRI and e-IRG How do the research side and infrastructure side work together? Two-directional input requested Requirement for horizontal services Let’s not create disciplinary IT silos Synergy required for cost efficiency and excellence ICT infrastructure is essential for research The role of computational science is growing Renewal and competence Will Europe run out of competent people? Will training and education programs react fast enough?
  • 78. Requirements of a sustainable HPC Ecosystem How to guarantee access to the top for selected groups? How to ensure there are competent users who can use the high-end resources? How to involve all countries that can contribute? How to develop competence on home ground? How to boost collaboration between research and e-infrastructure providers? What are the principles of resource exchange (in-kind)? European centers; national/regional centers, Grid collaboration; universities and local centers
  • 80. Some conclusions There are far too many acronyms in this field We need to collaborate in providing e-infrastructure From disciplinary silos to horizontal services Building trust between research and service providers Moving from project-based work to sustainable research infrastructures Balanced approach: focus not only on computing but also on data, software development and competence Driven by user community needs: technology is a tool, not a target The ESFRI list and other development plans will boost the market of ICT services in research Interoperability and integration of initiatives will be seriously discussed
  • 81. Final words to remember “The problems are not solved by computers nor by any other e-infrastructure, they are solved by people” Kimmo Koski, today