INDUSTRY DEVELOPMENTS AND MODELS

How Nations Are Applying High-End Petascale Supercomputers for
Innovation and Economic Advancement in 2012

Earl C. Joseph, Ph.D.
Chirag Dekate, Ph.D.
Steve Conway

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA   P.508.872.8200   F.508.935.4015   www.idc.com


IDC OPINION

There is a growing contingent of nations around the world that are investing
considerable resources to install supercomputers with the potential to reshape how
science and engineering are accomplished. In many cases, the cost of admission for
each supercomputer now exceeds $100 million and could soon reach $1 billion.

Some countries are laying the groundwork to build "fleets" of these truly large-scale
computers to create a sustainable long-term competitive advantage. In a growing
number of cases, it is becoming a requirement to show or "prove" the return on
investment (ROI) of the supercomputer and/or the entire center. With the current
global economic pressures, nations need to better understand the ROI of making
these large investments and decide if even larger investments should be made. It's
not an easy path, as there are sizable challenges in implementing these truly super
supercomputers:

 Along with the cost of the computers, there are substantial costs for power,
  cooling, and storage.

 In most cases, new datacenters have to be built, and often entire new research
  organizations are created around the new or expanded mission.

 In many situations, the application software needs to be rewritten or even
  fundamentally redesigned in order to perform well on these systems.

 But the most important ingredient is still the research team. The pool of talent
  that can take advantage of these systems is very limited, and competition is
  heating up to attract them.

Filing Information: August 2012, IDC #236341, Volume: 1
Technical Computing: Industry Developments and Models

IN THIS STUDY
This study explores the potential impact of the latest generation of large-scale
supercomputers that are just now becoming available to researchers around the
world. It also provides a list of the current petascale-class supercomputers around the
world. These systems, along with others planned to be installed over the next few
years, create an opportunity to substantially change how scientific research is
accomplished in many fields.

IDC has conducted a number of studies around petascale initiatives, petascale
applications, buyer requirements, and what will be required to compete in the future.
IDC has also worked with the U.S. Council on Competitiveness on a number of HPC
studies, and the council has accurately summarized the situation with the phrase that
companies and nations must "outcompute to outcompete."



SITUATION OVERVIEW
Over the past five years, there has been a race around the world to build
supercomputers with peak and actual (sustained) performance of 1PFLOPS (a
petaflop is one quadrillion, or 10^15, floating point operations per second). The
next-generation supercomputers are targeting 1,000 times more performance, called
exascale computing (10^18 operations per second), starting in 2018. Currently,
systems in the 10–20PFLOPS range are being installed, and 100PFLOPS systems
are expected by 2015.
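
To make these performance scales concrete, the short Python sketch below simply
works out the arithmetic implied by the definitions above (tera, peta, exa). The rates
are treated as nominal sustained figures for illustration only; they are not
measurements of any specific system named in this study.

```python
# Illustrative arithmetic relating the performance prefixes used in this study.
TERA = 10**12   # 1 TFLOPS = one trillion operations per second
PETA = 10**15   # 1 PFLOPS = one quadrillion operations per second
EXA  = 10**18   # 1 EFLOPS = one quintillion operations per second

# An exascale system is 1,000 times a petascale system.
print(EXA / PETA)                                  # 1000.0

# Time a sustained 1PFLOPS system needs to match one second of exascale work.
print(f"{EXA / PETA:.0f} seconds, or about {EXA / PETA / 60:.1f} minutes")

# A 100PFLOPS system versus the 10-20PFLOPS systems being installed today.
print(100 * PETA / (20 * PETA))                    # 5.0x
```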

Already, the petascale and near-petascale level of performance is allowing science of
unprecedented complexity to be accomplished on a computer:

 Scientific research beyond what is feasible in a physical laboratory is becoming
  possible for a larger set of researchers.

 First principles–based science is becoming possible in more disciplines.

 Research based on the nano level and at the molecular level is becoming possible.

        Future research at an atomic level will soon become possible for designing
         many everyday items.

 Analysis via simulation instead of live experiments will be possible on a greatly
  increased scale.

In the United States, the race for petascale supercomputers was started when
DARPA launched a multiphase runoff program to design innovative, sustained
petascale-class supercomputers that also addressed a growing concern around user
productivity. The great unanswered (and poorly addressed) question in the high-end
HPC market is: how can a broad set of users actually make use of the new
generation of computers that are so highly parallel? Some of the complexities of this
problem are:

 Processor speeds in general are not getting faster, so users have to find ways to
  use more processors to obtain speedups (a simple scaling sketch follows this list).


 The memory bandwidth in and out of each processor core (and the per-core
  memory size) is declining at an alarming rate and is projected to continue to
  decline over the foreseeable future.

 Many user applications were designed and/or written many years ago, often over
  20 years ago. They tended to be designed around a single processor with strong
  memory access capabilities (e.g., vector computers). These codes have been
  enhanced and improved over the years, but in most cases, they need to be
  fundamentally redesigned to work at scale on the newest supercomputers.
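
As a rough illustration of why highly parallel systems are hard to exploit, the Python
sketch below applies Amdahl's law, the standard first-order scaling model (it is not
discussed explicitly in this study). The serial fractions used are arbitrary examples,
not measurements of any application mentioned here.

```python
# Amdahl's law: speedup on n cores when a fraction s of the work stays serial.
# The serial fractions below are arbitrary illustrative values.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for serial_fraction in (0.10, 0.01, 0.001):
    for cores in (1_000, 100_000, 1_000_000):
        speedup = amdahl_speedup(serial_fraction, cores)
        print(f"serial={serial_fraction:<6} cores={cores:<9,} speedup={speedup:,.0f}x")

# Even a 1% serial fraction caps speedup near 100x regardless of core count,
# which is why legacy codes often need fundamental redesign rather than tuning.
```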

The current situation is opening the door for new applications that can take advantage
of the latest computers. Many current applications have a more difficult time going
through this level of transformation because of the lack of talent, the steep costs of
redesigns, the need to bring all of the legacy features forward, and an unclear
business model. A debate is growing about whether a more open source application
model may work in many areas.


The Use of Higher-End Computer Simulation
Model–Based Science

Figure 1 shows that the use of supercomputer-class HPC systems was roughly stable
(from a revenue perspective) up to 2008; then starting in 2009, major growth began.
IDC defines a supercomputer as a technical server that costs over $500,000. The
strongest growth has been for supercomputers with a price over $3 million.
Supercomputer revenue was running at around $4.5 billion a year in 2011.

Figure 2 shows IDC's forecast supercomputer revenue for 2006–2016. This portion of
the HPC market is expected to grow to over $5 billion in a few years and approach
$6 billion by 2016. Note that with system prices growing at the top of the market,
yearly revenue will likely show major swings up and down, as individual sales near
$1 billion can have major impacts (e.g., two $1 billion sales in a single year would
represent over a third of the entire market sector).
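
The sensitivity described above can be checked with back-of-the-envelope
arithmetic. The snippet below uses only the revenue figures quoted in this section
(roughly $4.5 billion in 2011 growing toward $6 billion); it is an illustration, not an
IDC forecasting model.

```python
# How much two $1 billion systems would swing a $4.5B-$6B supercomputer market.
for market_size in (4.5e9, 5.0e9, 6.0e9):
    share = 2 * 1.0e9 / market_size
    print(f"${market_size / 1e9:.1f}B market: two $1B sales = {share:.0%} of the year")
```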

IDC expects that the sales of very large HPC systems, those selling for over $50
million, will likely increase over the next few years, as more countries invest in large
simulation capabilities. In some countries, this may reduce their spending on more
moderate-size systems as they pool resources to invest in a few very large systems.
One example is the new K system in Japan that has a price of over $500 million.
There is a strong possibility that multiple HPC systems costing close to $1 billion will
be built over the next five years.




FIGURE 1

Worldwide Supercomputer Technical Server Revenue,
2006–2011




Source: IDC, 2012




FIGURE 2

Worldwide Supercomputer Technical Server Revenue,
2006–2016




Source: IDC, 2012




Petascale Supercomputers and Their Planned
Uses (as of August 2012)

There are currently 20 known petascale-class supercomputers in the world:

 The top-ranked 20-petaflop Sequoia system, hosted at LLNL, is used to ensure
  the safety and reliability of the U.S. nuclear weapons stockpile and to advance
  research in various domains including astronomy, energy, human genome
  science, and climate change.

 The 11.28-petaflop K computer hosted at RIKEN is planned to be used for
  climate research, disaster prevention, and medical research.

 The Mira supercomputer hosted at ANL, providing over 10.06 petaflops of
  compute capability, is devoted entirely to open science applications such as
  climate studies, engine design, cosmology, and battery research.

 The 3.18-petaflop SuperMUC supercomputer hosted at Leibniz-Rechenzentrum
  is scheduled to be used for three key disciplines — astrophysics, engineering
  and energy, and chemistry and materials research.

 The Tianhe-1A supercomputer at the National Supercomputing Center in Tianjin,
  China, providing over 4.70 petaflops of compute capability, has been used to
  execute record-breaking scientific simulations, including research in solar energy
  and a petascale molecular dynamics simulation that utilizes both CPUs and
  GPUs.

 At ORNL, the Jaguar supercomputer system, which currently provides 2.63
  petaflops of computation capability, has been used for a diverse range of
  simulations, from probing the potential for new energy sources to dissecting the
  dynamics of climate change to manipulating protein functions. Jaguar has also
  been used to study complex problems including supernovae, nuclear fusion,
  photosynthesis research enabling next-generation ethanol, and complex
  ecological phenomena.

 Molecular dynamics, protein simulations, quantum chemistry and molecular
  dynamics, engineering and structural simulations, and astrophysics are some of
  the applications that are being ported to the 2.09-petaflop FERMI supercomputer
  hosted at CINECA (Italy).

 The 1.68-petaflop JUQUEEN supercomputer at Forschungszentrum Jülich is
  planned to be used for a broad range of applications including computational
  plasma physics, protein folding, quantum information processing, and pedestrian
  dynamics.

 French researchers have performed the first-ever simulation of the entire
  observable universe from the Big Bang to the present day, comprising over 500
  billion particles, on the 1.67-petaflop CURIE supercomputer operated by Grand
  Equipement National de Calcul Intensif (GENCI) at the CEA in France.




 The 2.98-petaflop Nebulae supercomputer hosted at the National
  Supercomputing Center in Shenzhen, China, has been used for geological
  simulations including oil exploration, as well as for space exploration, avionics,
  and other large-scale traditional scientific and engineering simulations.

 The Pleiades supercomputer at NASA Ames, which provides over 1.73 petaflops
  of compute capability, is being applied to understand complex problems like
  calculation of size, orbit, and location of planets surrounding stars; research and
  development of next-generation space launch vehicles; astrophysics simulations;
  and climate change simulations.

 The 1.52-petaflop Helios supercomputer at the International Fusion Energy
  Research Center, Japan, is dedicated to advancing fusion energy research as
  part of the ITER project and is used to model the behavior of plasma
  (ultra-high-temperature ionized gas) in intense magnetic fields and to design
  materials that are subjected to extreme temperatures and pressures.

 The 1.47-petaflop Blue Joule supercomputer hosted at the Science and
  Technology Facilities Council in Daresbury, the United Kingdom, is being used to
  develop a next-generation weather forecasting model for the United Kingdom
  that would simulate the winds, temperature, and pressure that, in combination
  with dynamic processes such as cloud formation, would allow for the simulation
  of changing weather conditions.

 The TSUBAME2.0 supercomputer at the Tokyo Institute of Technology, which
  enables over 2.29 petaflops of capability, has been used to run a Gordon Bell
  award-winning simulation involving complex dendritic structures, utilizing over
  16,000 CPUs and 4,000 GPUs.

 The 1.37-petaflop Cielo supercomputer hosted at Los Alamos National
  Laboratory enables scientists to increase their understanding of complex
  physics and improve confidence in the predictive capability for stockpile
  stewardship. Cielo is primarily utilized to perform milestone weapons
  calculations.

 The Cray XE6 Hopper supercomputer at Lawrence Berkeley National Lab is
  capable of 1.29 petaflops and has been used to process over 3.5 terabases of
  genome sequence data comprising over 5,000 samples from 242 people,
  requiring over 2 million computer hours to complete the project.

 The 1.25-petaflop Tera 100 supercomputer hosted at the CEA in France has
  been used for solving problems in diverse domains including astrophysics, life
  sciences, plasma accelerators, seismic simulations, molecular dynamics, and
  defense applications.

 The SPARC64-based 1.04-petaflop Oakleaf-FX supercomputer in Japan is
  planned to be used for a wide variety of simulations, including earth sciences,
  weather modeling, seismology, hydrodynamics, biology, materials science,
  astrophysics, solid state physics, and energy simulations.




 The 1.26-petaflop DiRAC supercomputer, based on the BlueGene/Q architecture
  and hosted at the University of Edinburgh, has been used for complex
  astrophysics simulations, including the physics of dark matter, and for
  computational techniques such as lattice quantum chromodynamics.

 IBM's "Roadrunner," the world's first petascale supercomputer, is being used
  primarily to ensure the safety and reliability of America's nuclear weapons
  stockpile but also to tackle problems in astronomy, energy, human genome
  science, and climate change.

In addition, the United States, China, Russia, the European Union (EU), Japan, India,
and others are investing in several petascale systems (10–100 petaflops) in
preparation to reach the exascale dream.


Petascale Applications in Use Today

Material science and nanotechnology typically rank high on the list of petascale
candidate applications because they can never get enough computing power.
Weather forecasting and climate science also have insatiable appetites for compute
power, as do combustion modeling, astrophysics, particle physics, drug discovery,
and newer fields including fusion energy research. Scientists today are performing ab
initio calculations with a few thousand atoms, but they would like to increase that to
hundreds of thousands of atoms.

Note: Ab initio quantum chemistry methods are computational chemistry methods
based on quantum chemistry. The term ab initio indicates that the calculation is from
first principles and that no empirical data is used.

Additional intriguing examples include real-time magnetic resonance imaging (MRI)
during surgery, cosmology, plasma rocket engines for space exploration, functional
brain modeling, design of advanced aircraft and, last but hardly least, severe weather
prediction.

Table 1 shows a broad landscape of the current petascale research areas. Many of
these are already being planned for running at the exascale level.




TABLE 1

Current Petascale Research Areas

Sector | Application Description | Potential Breakthrough/Value of the Work
------ | ------------------------ | -----------------------------------------
Aerospace | Advanced aircraft, spacecraft design | Improved safety and fuel efficiency
Alternative energy | Identify and test newer energy sources | National energy security
Automotive | Full fidelity crash testing | Improved safety, reduced noise and fuel consumption
Automotive | True computational steering | Cars that travel 150,000mi without repairs
Business intelligence | More complex analytics algorithms | Improved business decision making and management
Electrical power grid distribution | Track and distribute power | Better distribution efficiency and response to changing needs
Global climate modeling | High-resolution climate models | Improved response to climate change
Cryptography and signal processing | Process digital code and signals | Improved national defense capabilities
DNA sequence analysis | Full genome analysis and comparison | Person-specific drugs (pharmacogenetics)
Digital brain and heart | High-resolution, functional models | New disease-fighting capabilities
Haptics | Simulate sense of touch | Improved surgical training and planning
Nanotechnology | Model materials at nanoscale | Pioneering new materials for many fields
National-scale economic modeling | Model national economy | Better economic planning and responsiveness
Nuclear weapons stewardship | Test stability, functionality (nonlive) | Ensure weapons safety and preparedness
Oil and gas | Improved seismic, reservoir simulation | Improved discovery and extraction of oil and gas resources
Pandemic diseases, designer plagues | Identify the nature of and response to diseases | Combat new disease strains and bioterrorism
Pharmaceutical | Virtual surgical planning | Greatly reduce health costs while greatly increasing the quality of healthcare
Real-time medical imaging | Process medical images without delay | Improved patient care and satisfaction
Weather forecasting | High-resolution forecast models | Better prediction of severe storms (reducing damage to lives and property)

Source: IDC, 2012




Scientific Breakthroughs Possible with Large-Scale HPC Systems

 Simulating about 100 mesocircuits using current simulation capabilities (With
  larger-scale systems and supporting computational models, researchers could
  potentially simulate over 1,000 times the scale of current cortical simulators,
  yielding a first simulation matching the scale of a human brain.)

 High-fidelity combustion simulations, which require extreme-scale computing to
  predict the behavior of alternative fuels in novel fuel-efficient, clean engines and
  so facilitate the design of optimal combined engine-fuel systems

 Extreme-scale systems that can be used to improve the predictive integrated
  modeling of nuclear systems:

        Capability to simulate multicomponent chemical reacting solutions

        Computations and simulations of 3D plant design

 Multiscale modeling and simulation of combustion at exascale, which is essential
  for enabling innovative prediction and design; extreme-scale systems that
  enable:

        Design and optimization of diesel/HCCI engines, in geometry with surrogate
         large-molecule fuels representative of actual fuels

        Direct numerical simulation of turbulent jet flames at engine pressures (30–
         50atm) with isobutanol (50–100 species) for diesel or HCCI engine
         thermochemical conditions

        Full-scale molecular dynamics simulations with on-the-fly ab initio force field
         to study the combined physical and chemical processes affecting
         nanoparticle and soot formation processes

 Improvement in scalability of applications to enable multiscale materials design,
  critical for improving the efficiency and cost of photovoltaics:

        Extreme-scale computations that help accurately predict the combined effect
         of multiple interacting structures and their effects upon electronic excitation
         and ultimate performance of a photovoltaic cell

 Modeling the entire U.S. power grid from every household power meter to every
  power-generation source, with feedback to all users on how to conserve energy,
  and then testing new alternative energy solutions against this "full" model of the
  whole U.S. power grid

 Modeling the evolution possibilities for DNA — going out in long time frames
  (e.g., several million years) — and investigating the potential impact this could
  have on human physiology

 Extreme-scale systems to enable full quantum simulations that are required for
  efficient storage of energy in chemical bonds found in catalysts (Current systems
  can simulate up to 1,000 atoms at 3nm resolution. Simulations on the order of
  6nm with 10,000 atoms are needed to effectively conduct multiscale modeling.)

 Modeling of the entire world's economy to first obtain the ability to see problems
  before they happen and then the ability to pretest solutions to better understand
  their likely effects and eventually find ways to assist the economies and avoid
  severe downturns

 The design of automobiles with 150,000mi reliability, with no failures

 Designing drugs that are directly based on each individual's DNA, via simulation
  of protein folding for both disease control and specific drug design

 Designing solar panels that are at least 50% efficient and are low cost to
  produce; for example, a 10 x 20ft panel at consumer prices to be purchased at
  consumer home improvement stores and no more difficult to install than a lawn
  sprinkler system, where a few of these panels can generate more electricity than
  is needed for a house and two cars

 Full airflow around an entire vehicle like a helicopter that is accurate enough to
  remove the need for any wind tunnel testing

 Petascale systems and beyond to also help in parallel direct numerical simulation
  of combustion in a low-temperature, high-pressure environment shedding light on
  how jet flames ignite and become stable in diesel environments (The systems
  can also accelerate the development of predictive models for clean and efficient
  combustion of new transportation fuels.)

 Simulations to help investigate Cooper pairs, the electron pairings that transform
  materials into superconductors (Superconducting research can be accelerated to
  help better understand the difference in temperature at which various materials
  become superconductors, with the goal of developing superconductors that do
  not require cooling.)



FUTURE OUTLOOK

HPC and the Balance of Power
Between Nations

The competition among nations for economic, scientific, and military leadership and
advantage is increasingly being driven by computational capabilities. Very large-scale
HPC systems will tend to push nations that employ them ahead of those that don't. A
set of petascale supercomputers, along with the researchers to use them, can
reshape how a nation advances itself in both scientific innovation and economic
industrial growth. This is why countries/regions like the United States, China, the
European Union, Japan, and even Russia are working to install multiple petascale
computers with an eye on exascale in the future.




Some immediate advantages of installing a fleet of petascale supercomputers in a
country are:

 It tends to keep the top scientists, engineers, and researchers within your
  country.

 It helps attract top scientists and engineers around the world to your country.

 It motivates more students within your country to go into scientific and
  engineering disciplines — building the intellectual base for the future.

The major mid- to longer-term advantages include:

 Increasing the economic size (GDP) and growth of a nation by having more
  competitive products and services, faster time to market, and the ability to
  manufacture products that other nations can't

 Increasing the capability of homeland defense and military security — nations
  without this scale of computational power will more often find themselves asking:
  What happened?


The Need for Better Measurement of the ROI from HPC Investments

In a growing number of cases, it is becoming a requirement to show or "prove" the
return on investment of the supercomputer and/or the entire center. With the current
global economic pressures, nations need to better understand the ROI of making
these large investments and decide if even larger investments should be made. IDC
is researching new ways to better measure and report the ROI of HPC for both
science and industry. For science, ROI can be measured by focusing on innovation
and discovery, while for industry, it is focused on dollars (profits, revenues, or lower
costs) and making better products/services. The new IDC HPC innovation award
program is one way of collecting a broad set of ROI examples.
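
As a simple illustration of the industrial side of this measurement problem, the sketch
below computes a conventional financial ROI from the kinds of dollar benefits the text
mentions (profits, revenues, or lower costs attributable to HPC). Both the formula and
the example figures are generic assumptions, not IDC's methodology or the metrics
used in its innovation award program.

```python
# Conventional financial ROI: (attributable benefit - investment) / investment.
# All figures below are made-up illustrations, not IDC data.
def simple_roi(benefit_usd: float, investment_usd: float) -> float:
    return (benefit_usd - investment_usd) / investment_usd

# e.g., a $100M supercomputer credited with $340M of added profit,
# new revenue, and cost savings over its service life.
print(f"ROI = {simple_roi(340e6, 100e6):.0%}")   # ROI = 240%
```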



ESSENTIAL GUIDANCE
Petascale initiatives will benefit a substantial number of users. Although the initial
crop of petascale systems and their antecedents is still small, many thousands of
users will have access to these systems (e.g., through the U.S. Department of
Energy's INCITE program and analogous undertakings). Industrial users will also gain
access to these systems for their most advanced research.

Petascale and exascale initiatives will intensify efforts to increase scalable application
performance. Sites that are advancing in phases toward petascale/exascale systems,
often in direct or quasi competition with other sites for funding, will be highly
motivated to demonstrate impressive performance gains that may also result in
scientific breakthroughs. To achieve these performance gains, researchers will need
to intensify their efforts to boost scalable application performance by taming large-
scale, many-core parallelism. These efforts will then benefit the larger HPC
community and, over time, will likely also benefit the computer industry as a whole.




Some important applications will likely be left behind. Petascale initiatives will not
benefit applications that do not scale well on today's HPC systems, unless these
applications are fundamentally rewritten. Rewriting is an especially daunting and
costly proposition for some important engineering codes that require a decade or
more to certify and incrementally enhance.

Some petascale developments and technologies will likely "trickle down" to the
mainstream HPC market, while others may not. Evolutionary improvements,
especially in programming languages and file systems, will be most readily accepted
by mainstream HPC users but may be inadequate for exploiting the potential of
petascale/exascale systems. Conversely, revolutionary improvements (e.g., new
programming languages) that greatly facilitate work on petascale systems may be
rejected by some HPC users as requiring too painful a change. The larger issue is
whether petascale developments will bring the government-driven high-end HPC
market closer to the HPC mainstream or push these two segments further apart. This
remains to be seen.



LEARN MORE

Related Research

Related research from IDC's Technical Computing hardware program and media
articles written by IDC's Technical Computing team include the following:

 HPC End-User Site Update: RIKEN Advanced Institute for Computational
  Science (IDC #233690, March 2012)

 National Supercomputing Center in Tianjin (IDC #233971, March 2012)

 Worldwide Technical Computing 2012 Top 10 Predictions (IDC #233355, March
  2012)

 IDC Predictions 2012: High Performance Computing (IDC #WC20120221,
  February 2012)

 Worldwide Data Intensive–Focused HPC Server Systems 2011–2015 Forecast
  (IDC #232572, February 2012)

 Exploring the Big Data Market for High-Performance Computing (IDC #231572,
  November 2011)

 IDC's Worldwide Technical Server Taxonomy, 2011 (IDC #231335, November
  2011)

 High Performance Computing Center Stuttgart (IDC #230329, October 2011)

 Oak Ridge $97 Million HPC Deal Confirms New Paradigm for the Exascale Era
  (IDC #lcUS23085111, October 2011)




 Large-Scale Simulation Using HPC: HPC User Forum, September 2011, San
  Diego, California (IDC #230694, October 2011)

 China's Third Petascale Supercomputer Uses Homegrown Processors (IDC
  #lcUS23116111, October 2011)

 Exascale Challenges and Opportunities: HPC User Forum, September 2011,
  San Diego, California (IDC #230642, October 2011)

 The World's New Fastest Supercomputer: The K Computer at RIKEN in Japan
  (IDC #229353, July 2011)

 A Strategic Agenda for European Leadership in Supercomputing: HPC 2020 —
  IDC Final Report of the HPC Study for the DG Information Society of the
  European Commission (IDC #SR03S, September 2010)

 HPC Petascale Programs: Progress at RIKEN and AICS Plans for Petascale
  Computing (IDC #223625, June 2010)

 The Shanghai Supercomputer Center: China on the Move (IDC #222287, March
  2010)

 IDC Leads Consortium Awarded Contract to Help Develop Supercomputing
  Strategy for the European Union (IDC #prUK22194910, February 2010)

 An Overview of 2009 China TOP100 Release (IDC #221675, January 2010)

 Massive HPC Systems Could Redefine Scientific Research and Shift the Balance
  of Power Among Nations (IDC #219948, September 2009)


Synopsis

This IDC study explores the potential impact of the latest generation of large-scale
supercomputers that are just now becoming available to researchers in the United
States and around the world. These systems, along with the ones planned to be
installed over the next few years, create an opportunity to substantially change how
scientific research is accomplished in many fields.

According to Earl Joseph, IDC program vice president, High-Performance Computing,
"We see the use of large-scale supercomputers becoming a driving factor in the
balance of power between nations."




Copyright Notice

This IDC research document was published as part of an IDC continuous intelligence
service, providing written research, analyst interactions, telebriefings, and
conferences. Visit www.idc.com to learn more about IDC subscription and consulting
services. To view a list of IDC offices worldwide, visit www.idc.com/offices. Please
contact the IDC Hotline at 800.343.4952, ext. 7988 (or +1.508.988.7988) or
sales@idc.com for information on applying the price of this document toward the
purchase of an IDC service or for information on additional copies or Web rights.

Copyright 2012 IDC. Reproduction is forbidden unless authorized. All rights reserved.





More Related Content

Viewers also liked

Nano hub u-nanoscaletransistors
Nano hub u-nanoscaletransistorsNano hub u-nanoscaletransistors
Nano hub u-nanoscaletransistorsChris O'Neal
 
Introducing the TPCx-HS Benchmark for Big Data
Introducing the TPCx-HS Benchmark for Big DataIntroducing the TPCx-HS Benchmark for Big Data
Introducing the TPCx-HS Benchmark for Big Datainside-BigData.com
 
Introduction to Database Benchmarking with Benchmark Factory
Introduction to Database Benchmarking with Benchmark FactoryIntroduction to Database Benchmarking with Benchmark Factory
Introduction to Database Benchmarking with Benchmark FactoryMichael Micalizzi
 

Viewers also liked (6)

Nano hub u-nanoscaletransistors
Nano hub u-nanoscaletransistorsNano hub u-nanoscaletransistors
Nano hub u-nanoscaletransistors
 
Coffee break
Coffee breakCoffee break
Coffee break
 
My Ocean Breve
My Ocean BreveMy Ocean Breve
My Ocean Breve
 
Introducing the TPCx-HS Benchmark for Big Data
Introducing the TPCx-HS Benchmark for Big DataIntroducing the TPCx-HS Benchmark for Big Data
Introducing the TPCx-HS Benchmark for Big Data
 
Introduction to Database Benchmarking with Benchmark Factory
Introduction to Database Benchmarking with Benchmark FactoryIntroduction to Database Benchmarking with Benchmark Factory
Introduction to Database Benchmarking with Benchmark Factory
 
Fujitsu_ISC10
Fujitsu_ISC10Fujitsu_ISC10
Fujitsu_ISC10
 

Similar to 236341 Idc How Nations Are Using Hpc August 2012

Sc10 slide share
Sc10 slide shareSc10 slide share
Sc10 slide shareGuy Tel-Zur
 
High Performance Computing Infrastructure as a Key Enabler to Engineering Des...
High Performance Computing Infrastructure as a Key Enabler to Engineering Des...High Performance Computing Infrastructure as a Key Enabler to Engineering Des...
High Performance Computing Infrastructure as a Key Enabler to Engineering Des...NSEAkure
 
World's Most Influential Leaders Inspiring The Tech World, 2024
World's Most Influential Leaders Inspiring The Tech World, 2024World's Most Influential Leaders Inspiring The Tech World, 2024
World's Most Influential Leaders Inspiring The Tech World, 2024Worlds Leaders Magazine
 
Parallel_Computing_future
Parallel_Computing_futureParallel_Computing_future
Parallel_Computing_futureHiroshi Ono
 
Nikravesh australia long_versionkeynote2012
Nikravesh australia long_versionkeynote2012Nikravesh australia long_versionkeynote2012
Nikravesh australia long_versionkeynote2012Masoud Nikravesh
 
Top data center trends and predictions to watch for in 2016.
Top data center trends and predictions to watch for in 2016.Top data center trends and predictions to watch for in 2016.
Top data center trends and predictions to watch for in 2016.Swaroopanand Laxmikruppaneth
 
5 biggest hpc trends 2021
5 biggest hpc trends 20215 biggest hpc trends 2021
5 biggest hpc trends 2021Sandeep Mishra
 
CC LECTURE NOTES (1).pdf
CC LECTURE NOTES (1).pdfCC LECTURE NOTES (1).pdf
CC LECTURE NOTES (1).pdfHasanAfwaaz1
 
Aviation Wikinomics
Aviation WikinomicsAviation Wikinomics
Aviation Wikinomicsguesta9496c4
 
High Performance Computing and Big Data: The coming wave
High Performance Computing and Big Data: The coming waveHigh Performance Computing and Big Data: The coming wave
High Performance Computing and Big Data: The coming waveIntel IT Center
 
Development Trends of Next-Generation Supercomputers
Development Trends of Next-Generation SupercomputersDevelopment Trends of Next-Generation Supercomputers
Development Trends of Next-Generation Supercomputersinside-BigData.com
 
Big data high performance computing commenting
Big data   high performance computing commentingBig data   high performance computing commenting
Big data high performance computing commentingIntel IT Center
 
next-generation-data-centers
next-generation-data-centersnext-generation-data-centers
next-generation-data-centersJason Hoffman
 
Crossing the performance chasm with open power - IBM
Crossing the performance chasm with open power - IBMCrossing the performance chasm with open power - IBM
Crossing the performance chasm with open power - IBMDiego Alberto Tamayo
 
applications-and-current-challenges-of-supercomputing-across-multiple-domains...
applications-and-current-challenges-of-supercomputing-across-multiple-domains...applications-and-current-challenges-of-supercomputing-across-multiple-domains...
applications-and-current-challenges-of-supercomputing-across-multiple-domains...Neha Gupta
 
Adaptable embedded systems
Adaptable embedded systemsAdaptable embedded systems
Adaptable embedded systemsSpringer
 

Similar to 236341 Idc How Nations Are Using Hpc August 2012 (20)

Sc10 slide share
Sc10 slide shareSc10 slide share
Sc10 slide share
 
PRObE
PRObEPRObE
PRObE
 
High Performance Computing Infrastructure as a Key Enabler to Engineering Des...
High Performance Computing Infrastructure as a Key Enabler to Engineering Des...High Performance Computing Infrastructure as a Key Enabler to Engineering Des...
High Performance Computing Infrastructure as a Key Enabler to Engineering Des...
 
World's Most Influential Leaders Inspiring The Tech World, 2024
World's Most Influential Leaders Inspiring The Tech World, 2024World's Most Influential Leaders Inspiring The Tech World, 2024
World's Most Influential Leaders Inspiring The Tech World, 2024
 
Parallel_Computing_future
Parallel_Computing_futureParallel_Computing_future
Parallel_Computing_future
 
Nikravesh australia long_versionkeynote2012
Nikravesh australia long_versionkeynote2012Nikravesh australia long_versionkeynote2012
Nikravesh australia long_versionkeynote2012
 
Top data center trends and predictions to watch for in 2016.
Top data center trends and predictions to watch for in 2016.Top data center trends and predictions to watch for in 2016.
Top data center trends and predictions to watch for in 2016.
 
5 biggest hpc trends 2021
5 biggest hpc trends 20215 biggest hpc trends 2021
5 biggest hpc trends 2021
 
CC LECTURE NOTES (1).pdf
CC LECTURE NOTES (1).pdfCC LECTURE NOTES (1).pdf
CC LECTURE NOTES (1).pdf
 
Aviation Wikinomics
Aviation WikinomicsAviation Wikinomics
Aviation Wikinomics
 
High Performance Computing and Big Data: The coming wave
High Performance Computing and Big Data: The coming waveHigh Performance Computing and Big Data: The coming wave
High Performance Computing and Big Data: The coming wave
 
Development Trends of Next-Generation Supercomputers
Development Trends of Next-Generation SupercomputersDevelopment Trends of Next-Generation Supercomputers
Development Trends of Next-Generation Supercomputers
 
BigDataCSEKeyNote_2012
BigDataCSEKeyNote_2012BigDataCSEKeyNote_2012
BigDataCSEKeyNote_2012
 
Big data high performance computing commenting
Big data   high performance computing commentingBig data   high performance computing commenting
Big data high performance computing commenting
 
next-generation-data-centers
next-generation-data-centersnext-generation-data-centers
next-generation-data-centers
 
Crossing the performance chasm with open power - IBM
Crossing the performance chasm with open power - IBMCrossing the performance chasm with open power - IBM
Crossing the performance chasm with open power - IBM
 
applications-and-current-challenges-of-supercomputing-across-multiple-domains...
applications-and-current-challenges-of-supercomputing-across-multiple-domains...applications-and-current-challenges-of-supercomputing-across-multiple-domains...
applications-and-current-challenges-of-supercomputing-across-multiple-domains...
 
Innovation in Silicon Valley
Innovation in Silicon ValleyInnovation in Silicon Valley
Innovation in Silicon Valley
 
Big data story of success
Big data story of successBig data story of success
Big data story of success
 
Adaptable embedded systems
Adaptable embedded systemsAdaptable embedded systems
Adaptable embedded systems
 

More from Chris O'Neal

Intel Xeon Phi Hotchips Architecture Presentation
Intel Xeon Phi Hotchips Architecture PresentationIntel Xeon Phi Hotchips Architecture Presentation
Intel Xeon Phi Hotchips Architecture PresentationChris O'Neal
 
Incite Ir Final 7 19 11
Incite Ir Final 7 19 11Incite Ir Final 7 19 11
Incite Ir Final 7 19 11Chris O'Neal
 
Cloud Computing White Paper
Cloud Computing White PaperCloud Computing White Paper
Cloud Computing White PaperChris O'Neal
 
Idc Eu Study Slides 10.9.2010
Idc Eu Study Slides 10.9.2010Idc Eu Study Slides 10.9.2010
Idc Eu Study Slides 10.9.2010Chris O'Neal
 
Tolly210137 Force10 Networks E1200i Energy
Tolly210137 Force10 Networks E1200i EnergyTolly210137 Force10 Networks E1200i Energy
Tolly210137 Force10 Networks E1200i EnergyChris O'Neal
 
Rogue Wave Corporate Vision(P) 5.19.10
Rogue Wave Corporate Vision(P)   5.19.10Rogue Wave Corporate Vision(P)   5.19.10
Rogue Wave Corporate Vision(P) 5.19.10Chris O'Neal
 
Hpc R2 Beta2 Press Deck 2010 04 07
Hpc R2 Beta2 Press Deck 2010 04 07Hpc R2 Beta2 Press Deck 2010 04 07
Hpc R2 Beta2 Press Deck 2010 04 07Chris O'Neal
 
Q Dell M23 Leap V2x
Q Dell M23 Leap   V2xQ Dell M23 Leap   V2x
Q Dell M23 Leap V2xChris O'Neal
 
Fca Product Overview Feb222010 As
Fca Product Overview Feb222010 AsFca Product Overview Feb222010 As
Fca Product Overview Feb222010 AsChris O'Neal
 
Idc Hpc Web Conf Predictions 2010 Final
Idc Hpc Web Conf Predictions 2010 FinalIdc Hpc Web Conf Predictions 2010 Final
Idc Hpc Web Conf Predictions 2010 FinalChris O'Neal
 
Adva Cloud Computing Final
Adva Cloud Computing FinalAdva Cloud Computing Final
Adva Cloud Computing FinalChris O'Neal
 

More from Chris O'Neal (16)

Intel Xeon Phi Hotchips Architecture Presentation
Intel Xeon Phi Hotchips Architecture PresentationIntel Xeon Phi Hotchips Architecture Presentation
Intel Xeon Phi Hotchips Architecture Presentation
 
Incite Ir Final 7 19 11
Incite Ir Final 7 19 11Incite Ir Final 7 19 11
Incite Ir Final 7 19 11
 
Ersa11 Holland
Ersa11 HollandErsa11 Holland
Ersa11 Holland
 
Cloud Computing White Paper
Cloud Computing White PaperCloud Computing White Paper
Cloud Computing White Paper
 
Idc Eu Study Slides 10.9.2010
Idc Eu Study Slides 10.9.2010Idc Eu Study Slides 10.9.2010
Idc Eu Study Slides 10.9.2010
 
Tolly210137 Force10 Networks E1200i Energy
Tolly210137 Force10 Networks E1200i EnergyTolly210137 Force10 Networks E1200i Energy
Tolly210137 Force10 Networks E1200i Energy
 
Tachion
TachionTachion
Tachion
 
Longbiofuel
LongbiofuelLongbiofuel
Longbiofuel
 
Casl Fact Sht
Casl Fact ShtCasl Fact Sht
Casl Fact Sht
 
Rogue Wave Corporate Vision(P) 5.19.10
Rogue Wave Corporate Vision(P)   5.19.10Rogue Wave Corporate Vision(P)   5.19.10
Rogue Wave Corporate Vision(P) 5.19.10
 
Hpc R2 Beta2 Press Deck 2010 04 07
Hpc R2 Beta2 Press Deck 2010 04 07Hpc R2 Beta2 Press Deck 2010 04 07
Hpc R2 Beta2 Press Deck 2010 04 07
 
Q Dell M23 Leap V2x
Q Dell M23 Leap   V2xQ Dell M23 Leap   V2x
Q Dell M23 Leap V2x
 
Fca Product Overview Feb222010 As
Fca Product Overview Feb222010 AsFca Product Overview Feb222010 As
Fca Product Overview Feb222010 As
 
Idc Hpc Web Conf Predictions 2010 Final
Idc Hpc Web Conf Predictions 2010 FinalIdc Hpc Web Conf Predictions 2010 Final
Idc Hpc Web Conf Predictions 2010 Final
 
Adva Cloud Computing Final
Adva Cloud Computing FinalAdva Cloud Computing Final
Adva Cloud Computing Final
 
Hpc Press Slides
Hpc Press SlidesHpc Press Slides
Hpc Press Slides
 

236341 Idc How Nations Are Using Hpc August 2012

  • 1. INDUSTRY DEVELOPMENTS AND MODELS How Nations Are Applying High-End Petascale Supercomputers for Innovation and Economic Advancement in 2012 Earl C. Joseph, Ph.D. Chirag Dekate, Ph.D. Steve Conway IDC OPINION www.idc.com There is a growing contingent of nations around the world that are investing considerable resources to install supercomputers with the potential to reshape how science and engineering are accomplished. In many cases, the cost of admission for each supercomputer now exceeds $100 million and could soon reach $1 billion. F.508.935.4015 Some countries are laying the groundwork to build "fleets" of these truly large-scale computers to create a sustainable long-term competitive advantage. In a growing number of cases, it is becoming a requirement to show or "prove" the return on investment (ROI) of the supercomputer and/or the entire center. With the current global economic pressures, nations need to better understand the ROI of making P.508.872.8200 these large investments and decide if even larger investments should be made. It's not an easy path, as there are sizable challenges in implementing these truly super supercomputers:  Along with the cost of the computers, there are substantial costs for power, Global Headquarters: 5 Speen Street Framingham, MA 01701 USA cooling, and storage.  In most cases, new datacenters have to be built, and often entire new research organizations are created around the new or expanded mission.  In many situations, the application software needs to be rewritten or even fundamentally redesigned in order to perform well on these systems.  But the most important ingredient is still the research team. The pool of talent that can take advantage of these systems is very limited, and competition is heating up to attract them. Filing Information: August 2012, IDC #236341, Volume: 1 Technical Computing: Industry Developments and Models
  • 2.
  • 3. IN THIS STUDY This study explores the potential impact of the latest generation of large-scale supercomputers that are just now becoming available to researchers around the world. It also provides a list of the current petascale-class supercomputers around the world. These systems, along with others planned to be installed over the next few years, create an opportunity to substantially change how scientific research is accomplished in many fields. IDC has conducted a number of studies around petascale initiatives, petascale applications, buyer requirements, and what will be required to compete in the future. IDC has also worked with the U.S. Council on Competitiveness on a number of HPC studies, and the council has accurately summarized the situation with the phrase that companies and nations must "outcompute to outcompete." SITUATION OVERVIEW Over the past five years, there has been a race around the world to build supercomputers with peak and actual (sustained) performance of 1PFLOPS (a petaflop equals a computer that can calculate a million million operations every second). The next-generation supercomputers are targeting 1,000 times more performance, called exascale computing (10^18 operations per second), starting in 2018. Currently, systems in the 10–20PFLOPS range are being installed, and 100PFLOPS systems are expected by 2015. Already, the petascale and near-petascale level of performance is allowing science of unprecedented complexity to be accomplished on a computer:  Scientific research beyond what is feasible in a physical laboratory is becoming possible for a larger set of researchers.  First principles–based science is becoming possible in more disciplines.  Research based on the nano level and at the molecular level is becoming possible.  Future research at an atomic level will soon become possible for designing many everyday items.  Analysis via simulation instead of live experiments will be possible on a greatly increased scale. In the United States, the race for petascale supercomputers was started when DARPA launched a multiphase runoff program to design innovative, sustained petascale-class supercomputers that also addressed a growing concern around user productivity. The great unanswered (and poorly addressed) question in the high-end HPC market is: how can a broad set of users actually make use of the new generation of computers that are so highly parallel? Some of the complexities of this problem are:  Processor speeds in general are not getting faster, so users have to find ways to use more processors to obtain speedups. ©2012 IDC #236341 1
  • 4.  The memory bandwidth in and out of each processor core (and the per-core memory size) is declining at an alarming rate and is projected to continue to decline over the foreseeable future.  Many user applications were designed and/or written many years ago, often over 20 years ago. They tended to be designed around a single processor with strong memory access capabilities (e.g., vector computers). These codes have been enhanced and improved over the years, but in most cases, they need to be fundamentally redesigned to work at scale on the newest supercomputers. The current situation is opening the door for new applications that can take advantage of the latest computers. Many current applications have a more difficult time going through this level of transformation because of the lack of talent, the steep costs of redesigns, the need to bring all of the legacy features forward, and an unclear business model. A debate is growing about whether a more open source application model may work in many areas. The Use of Higher-End Computer Simulation Model–Based Science Figure 1 shows that the use of supercomputer-class HPC systems was roughly stable (from a revenue perspective) up to 2008; then starting in 2009, major growth began. IDC defines a supercomputer as a technical server that costs over $500,000. The strongest growth has been for supercomputers with a price over $3 million. Supercomputer revenue was running at around $4.5 billion a year in 2011. Figure 2 shows IDC's forecast supercomputer revenue for 2006–2016. This portion of the HPC market is expected to grow to over $5 billion in a few years and get close to $6 billion by 2016. Note that with system prices growing at the top of the market, yearly revenue will likely show major swings up and down, as having singular sales near $1 billion can have major impacts (e.g., two $1 billion sales in a single year is over a third of the entire market sector). IDC expects that the sales of very large HPC systems, those selling for over $50 million, will likely increase over the next few years, as more countries invest in large simulation capabilities. In some countries, this may reduce their spending on more moderate-size systems as they pool resources to invest in a few very large systems. One example is the new K system in Japan that has a price of over $500 million. There is a strong possibility that multiple HPC systems costing close to $1 billion will be built over the next five years. 2 #236341 ©2012 IDC
  • 5. FIGURE 1 Worldwide Supercomputer Technical Server Revenue, 2006–2011 Source: IDC, 2012 FIGURE 2 Worldwide Supercomputer Technical Server Revenue, 2006–2016 Source: IDC, 2012 ©2012 IDC #236341 3
  • 6. Petascale Supercomputers and Their Planned Uses (as of August 2012) There are currently 20 known petascale-class supercomputers in the world:  The top-ranked 20-petaflop Sequoia system, hosted at LLNL, is used to ensure the safety and reliability of the U.S. nuclear weapons stockpile and advancing research in various domains including astronomy, energy, human genome science, and climate change.  The 11.28-petaflop K computer hosted at RIKEN is planned to be used for climate research, disaster prevention, and medical research.  The Mira supercomputer hosted at ANL, providing over 10.06 petaflops of compute capability, is devoted entirely to open science applications such as climate studies, engine design, cosmology, and battery research.  The 3.18-petaflop SuperMUC supercomputer hosted at Leibniz-Rechenzentrum is scheduled to be used for three key disciplines — astrophysics, engineering and energy, and chemistry and materials research.  The Tianhe-1A supercomputer at the National Supercomputing Center in Tianjin, China, providing over 4.70 petaflops of compute capability, has been used to execute record-breaking scientific simulations to further research in solar energy and a petascale molecular dynamics simulation that utilizes both CPUs and GPUs.  At ORNL, the Jaguar supercomputer system, which currently provides 2.63 petaflops of computation capability, has been used for a diverse range of simulations, from probing the potential for new energy sources to dissecting the dynamics of climate change to manipulating protein functions. Jaguar has also been used in studying complex problems including supernovae, nuclear fusion, photosynthesis research enabling next generation of ethanol, and complex ecological phenomena.  Molecular dynamics, protein simulations, quantum chemistry and molecular dynamics, engineering and structural simulations, and astrophysics are some of the applications that are being ported to the 2.09-petaflop FERMI supercomputer hosted at CINECA (Italy).  The 1.68-petaflop JUQUEEN supercomputer at Forschungszentrum Jülich is planned to be used for a broad range of applications including computational plasma physics, protein folding, quantum information processing, and pedestrian dynamics.  French researchers have simulated the first-ever simulation of the entire observable universe from Big Bang to the present day comprising over 500 billion particles on the 1.67-petaflop CURIE supercomputer operated by Grand Equipement National de Calcul Intensif (GENCI) at the CEA in France. 4 #236341 ©2012 IDC
  • 7.  The 2.98-petaflop Nebulae supercomputer hosted at the National Supercomputing Center in Shenzhen, China, has been used in geological simulations including oil exploration, space exploration, and avionics, among other large-scale traditional scientific and engineering simulations.  The Pleiades supercomputer at NASA Ames, which provides over 1.73 petaflops of compute capability, is being applied to understand complex problems like calculation of size, orbit, and location of planets surrounding stars; research and development of next-generation space launch vehicles; astrophysics simulations; and climate change simulations.  The 1.52-petaflop Helios supercomputer at the International Fusion Energy Research Center, Japan, is dedicated to researching and advancing fusion energy research as part of the ITER project and is used to model the behavior of plasma and ultra-high-temperature ionized gas in intensive magnetic fields and design materials that are subjected to extreme temperatures and pressures.  The 1.47-petaflop Blue Joule supercomputer hosted at the Science and Technology Facilities Council in Daresbury, the United Kingdom, is being used to develop a next-generation weather forecasting model for the United Kingdom that would simulate the winds, temperature, and pressure that, in combination with dynamic processes such as cloud formation, would allow for the simulation of changing weather conditions.  The TSUBAME2.0 supercomputer at the Tokyo Institute of Technology, which enables over 2.29 petaflops of capability, has been used to run a Gordon Bell award-winning simulation involving complex dendritic structures, utilizing over 16,000 CPUs and 4,000 GPUs.  The 1.37-petaflop Cielo supercomputer hosted at Los Alamos National Laboratory enables the scientists to increase their understanding of complex physics and improve confidence in the predictive capability for stockpile stewardship. Cielo is primarily utilized to perform milestone weapons calculations.  The Cray XT5 Hopper supercomputer at Lawrence Berkeley National Lab is capable of 1.29 petaflops and has been used to process over 3.5 terabases of genome sequence data comprising over 5,000 samples from 242 people, requiring over 2 million computer hours to complete the project.  The 1.25-petaflop Tera 100 supercomputer hosted at the CEA in France has been used for solving problems in diverse domains including astrophysics, life sciences, plasma accelerators, seismic simulations, molecular dynamics, and defense applications.  The SPARC64-based 1.04-petaflop Oakleaf-FX supercomputer in Japan is planned to be used for a wide variety of simulations, including earth sciences, weather modeling, seismology, hydrodynamics, biology, materials science, astrophysics, solid state physics, and energy simulations. ©2012 IDC #236341 5
• The 1.26-petaflop DiRAC supercomputer, based on the BlueGene/Q architecture and hosted at the University of Edinburgh, has been used for complex astrophysics simulations, including the physics of dark matter, and for computational techniques such as lattice quantum chromodynamics.

• IBM's "Roadrunner," the world's first petascale supercomputer, is being used primarily to ensure the safety and reliability of America's nuclear weapons stockpile but also to tackle problems in astronomy, energy, human genome science, and climate change.

In addition, the United States, China, Russia, the European Union (EU), Japan, India, and others are investing in several petascale systems (10–100 petaflops) in preparation for reaching the exascale dream.

Petascale Applications in Use Today

Materials science and nanotechnology typically rank high on the list of petascale candidate applications because they can never get enough computing power. Weather forecasting and climate science also have insatiable appetites for compute power, as do combustion modeling, astrophysics, particle physics, drug discovery, and newer fields including fusion energy research. Scientists today are performing ab initio calculations with a few thousand atoms, but they would like to increase that to hundreds of thousands of atoms. (A rough sketch of what that scale-up implies for compute requirements appears after Table 1.)

Note: Ab initio quantum chemistry methods are computational chemistry methods grounded directly in quantum mechanics. The term ab initio indicates that the calculation is performed from first principles and that no empirical data is used.

Additional intriguing examples include real-time magnetic resonance imaging (MRI) during surgery, cosmology, plasma rocket engines for space exploration, functional brain modeling, design of advanced aircraft and, last but hardly least, severe weather prediction.

Table 1 shows a broad landscape of the current petascale research areas. Many of these are already being planned for running at the exascale level.
TABLE 1

Current Petascale Research Areas

Sector | Application Description | Potential Breakthrough/Value of the Work
Aerospace | Advanced aircraft, spacecraft design | Improved safety and fuel efficiency
Alternative energy | Identify and test newer energy sources | National energy security
Automotive | Full-fidelity crash testing | Improved safety, reduced noise and fuel consumption
Automotive | True computational steering | Cars that travel 150,000mi without repairs
Business intelligence | More complex analytics algorithms | Improved business decision making and management
Electrical power grid distribution | Track and distribute power | Better distribution efficiency and response to changing needs
Global climate modeling | High-resolution climate models | Improved response to climate change
Cryptography and signal processing | Process digital code and signals | Improved national defense capabilities
DNA sequence analysis | Full genome analysis and comparison | Person-specific drugs (pharmacogenetics)
Digital brain and heart | High-resolution, functional models | New disease-fighting capabilities
Haptics | Simulate sense of touch | Improved surgical training and planning
Nanotechnology | Model materials at nanoscale | Pioneering new materials for many fields
National-scale economic modeling | Model national economy | Better economic planning and responsiveness
Nuclear weapons stewardship | Test stability, functionality (nonlive) | Ensure weapons safety and preparedness
Oil and gas | Improved seismic, reservoir simulation | Improved discovery and extraction of oil and gas resources
Pandemic diseases, designer diseases | Identify the nature of and response to bioterrorism | Combat new disease strains and plagues
Pharmaceutical | Virtual surgical planning | Greatly reduce health costs while greatly increasing the quality of healthcare
Real-time medical imaging | Process medical images without delay | Improved patient care and satisfaction
Weather forecasting | High-resolution forecast models | Better prediction of severe storms (reducing damage to lives and property)

Source: IDC, 2012
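To put the earlier ab initio example in perspective, the sketch below gives a rough, back-of-the-envelope estimate (not an IDC figure) of how the cost of a first-principles calculation could grow with atom count, assuming a cubic O(N^3) scaling that is typical of many density functional theory codes; the baseline runtime, atom counts, and exponent are illustrative assumptions only.

```python
# Rough back-of-the-envelope sketch (illustrative assumptions only):
# estimate how ab initio compute cost grows when moving from a few
# thousand atoms to hundreds of thousands, assuming O(N^3) scaling,
# which is typical of many density functional theory (DFT) codes.

def relative_cost(n_atoms, base_atoms=2_000, scaling_exponent=3):
    """Cost of an N-atom run relative to the baseline run, assuming O(N^p)."""
    return (n_atoms / base_atoms) ** scaling_exponent

if __name__ == "__main__":
    base_hours = 24.0  # assumed wall-clock time for a 2,000-atom baseline run
    for n in (2_000, 20_000, 200_000):
        factor = relative_cost(n)
        print(f"{n:>7,} atoms: ~{factor:>12,.0f}x the baseline "
              f"(~{base_hours * factor:,.0f} machine-hours at constant speed)")
```

Even if the exponent or baseline differs by code, the point stands: a 100x increase in atom count at cubic scaling implies roughly a millionfold increase in compute, which is the kind of gap that only much larger systems, paired with better algorithms, can close.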
Scientific Breakthroughs Possible with Large-Scale HPC Systems

• Simulating about 100 mesocircuits using current simulation capabilities (With larger-scale systems and supporting computational models, researchers could potentially scale current cortical simulators by over 1,000 times to develop a first simulation matching the scale of a human brain.)

• High-fidelity combustion simulations, which require extreme-scale computing to predict the behavior of alternative fuels in novel fuel-efficient, clean engines and so facilitate the design of optimal combined engine-fuel systems

• Extreme-scale systems that can be used to improve the predictive integrated modeling of nuclear systems:
  - Capability to simulate multicomponent chemical reacting solutions
  - Computations and simulations of 3D plant design

• Multiscale modeling and simulation of combustion at exascale, which is essential for enabling innovative prediction and design; extreme-scale systems that enable:
  - Design and optimization of diesel/HCCI engines, in geometry, with surrogate large-molecule fuels representative of actual fuels
  - Direct numerical simulation of turbulent jet flames at engine pressures (30–50atm) with isobutanol (50–100 species) for diesel or HCCI engine thermochemical conditions
  - Full-scale molecular dynamics simulations with on-the-fly ab initio force fields to study the combined physical and chemical processes affecting nanoparticle and soot formation

• Improvement in the scalability of applications to enable multiscale materials design, critical for improving the efficiency and cost of photovoltaics:
  - Extreme-scale computations that help accurately predict the combined effect of multiple interacting structures and their effects upon electronic excitation and the ultimate performance of a photovoltaic cell

• Modeling the entire U.S. power grid, from every household power meter to every power-generation source, with feedback to all users on how to conserve energy, and then testing new alternative energy solutions against this "full" model of the whole U.S. power grid

• Modeling the evolution possibilities for DNA over long time frames (e.g., several million years) and investigating the potential impact this could have on human physiology

• Extreme-scale systems to enable the full quantum simulations that are required for efficient storage of energy in the chemical bonds found in catalysts (Current systems can simulate up to 1,000 atoms at 3nm; larger, higher-fidelity simulations on the order of 6nm with 10,000 atoms are needed to effectively conduct multiscale modeling.)

• Modeling the entire world's economy, first to gain the ability to see problems before they happen, then to pretest solutions to better understand their likely effects, and eventually to find ways to assist economies and avoid severe downturns

• The design of automobiles with 150,000mi reliability, with no failures

• Designing drugs that are directly based on each individual's DNA, via simulation of protein folding, for both disease control and specific drug design

• Designing solar panels that are at least 50% efficient and low cost to produce; for example, a 10 x 20ft panel sold at consumer prices in home improvement stores and no more difficult to install than a lawn sprinkler system, where a few such panels can generate more electricity than is needed for a house and two cars

• Simulating the full airflow around an entire vehicle, such as a helicopter, accurately enough to remove the need for any wind tunnel testing

• Petascale systems and beyond that also help with parallel direct numerical simulation of combustion in low-temperature, high-pressure environments, shedding light on how jet flames ignite and become stable in diesel environments (These systems can also accelerate the development of predictive models for clean and efficient combustion of new transportation fuels.)

• Simulations to help investigate Cooper pairs, the electron pairings that transform materials into superconductors (Superconducting research can be accelerated to better understand the temperatures at which various materials become superconductors, with the goal of developing superconductors that do not require cooling.)

FUTURE OUTLOOK

HPC and the Balance of Power Between Nations

The competition among nations for economic, scientific, and military leadership and advantage is increasingly being driven by computational capabilities. Very large-scale HPC systems will tend to push nations that employ them ahead of those that don't. A set of petascale supercomputers, along with the researchers to use them, can reshape how a nation advances itself in both scientific innovation and economic industrial growth. This is why countries and regions like the United States, China, the European Union, Japan, and even Russia are working to install multiple petascale computers with an eye on exascale in the future.
Some immediate advantages of installing a fleet of petascale supercomputers in a country are:

• It tends to keep the top scientists, engineers, and researchers within your country.

• It helps attract top scientists and engineers from around the world to your country.

• It motivates more students within your country to go into scientific and engineering disciplines, building the intellectual base for the future.

The major mid- to longer-term advantages include:

• Increasing the economic size (GDP) and growth of a nation by having more competitive products and services, faster time to market, and the ability to manufacture products that other nations can't

• Increasing the capability of homeland defense and military security; nations without this scale of computational power will more often find themselves asking: What happened?

The Need for Better Measurement of the ROI from HPC Investments

In a growing number of cases, it is becoming a requirement to show or "prove" the return on investment of the supercomputer and/or the entire center. With the current global economic pressures, nations need to better understand the ROI of making these large investments and decide if even larger investments should be made. IDC is researching new ways to better measure and report the ROI of HPC for both science and industry. For science, ROI can be measured by focusing on innovation and discovery, while for industry, it is focused on dollars (profits, revenues, or lower costs) and making better products and services. The new IDC HPC innovation award program is one way of collecting a broad set of ROI examples.

ESSENTIAL GUIDANCE

Petascale initiatives will benefit a substantial number of users. Although the initial crop of petascale systems and their antecedents are still few in number, many thousands of users will have access to these systems (e.g., through the U.S. Department of Energy's INCITE program and analogous undertakings). Industrial users will also gain access to these systems for their most advanced research.

Petascale and exascale initiatives will intensify efforts to increase scalable application performance. Sites that are advancing in phases toward petascale/exascale systems, often in direct or quasi competition with other sites for funding, will be highly motivated to demonstrate impressive performance gains that may also result in scientific breakthroughs. To achieve these performance gains, researchers will need to intensify their efforts to boost scalable application performance by taming large-scale, many-core parallelism. These efforts will benefit the larger HPC community and, over time, will likely also benefit the computer industry as a whole.
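A small worked example helps show why taming many-core parallelism is so hard. The sketch below applies Amdahl's law, speedup = 1 / ((1 - p) + p/n), to hypothetical codes; the parallel fractions and core counts are illustrative assumptions, not measurements of any specific system or report finding.

```python
# Minimal sketch of Amdahl's law: even a tiny serial fraction caps the
# speedup achievable on a many-core petascale system. The parallel
# fractions and core counts below are illustrative assumptions only.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup on `cores` cores if `parallel_fraction` of the
    runtime parallelizes perfectly and the remainder stays serial."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

if __name__ == "__main__":
    for p in (0.95, 0.99, 0.999):
        for cores in (1_000, 100_000, 1_000_000):
            s = amdahl_speedup(p, cores)
            print(f"parallel fraction {p:.3f}, {cores:>9,} cores -> "
                  f"speedup ~{s:,.0f}x")
```

Even a code that is 99.9% parallel tops out near a 1,000x speedup no matter how many cores are added, which is why applications often need to be fundamentally restructured, not merely recompiled, to exploit these systems.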
Some important applications will likely be left behind. Petascale initiatives will not benefit applications that do not scale well on today's HPC systems unless those applications are fundamentally rewritten. Rewriting is an especially daunting and costly proposition for some important engineering codes that require a decade or more to certify and incrementally enhance.

Some petascale developments and technologies will likely "trickle down" to the mainstream HPC market, while others may not. Evolutionary improvements, especially in programming languages and file systems, will be most readily accepted by mainstream HPC users but may be inadequate for exploiting the potential of petascale/exascale systems. Conversely, revolutionary improvements (e.g., new programming languages) that greatly facilitate work on petascale systems may be rejected by some HPC users as requiring too painful a change. The larger issue is whether petascale developments will bring the government-driven high-end HPC market closer to the HPC mainstream or push these two segments further apart. This remains to be seen.

LEARN MORE

Related Research

Related research from IDC's Technical Computing hardware program and media articles written by IDC's Technical Computing team include the following:

• HPC End-User Site Update: RIKEN Advanced Institute for Computational Science (IDC #233690, March 2012)

• National Supercomputing Center in Tianjin (IDC #233971, March 2012)

• Worldwide Technical Computing 2012 Top 10 Predictions (IDC #233355, March 2012)

• IDC Predictions 2012: High Performance Computing (IDC #WC20120221, February 2012)

• Worldwide Data Intensive–Focused HPC Server Systems 2011–2015 Forecast (IDC #232572, February 2012)

• Exploring the Big Data Market for High-Performance Computing (IDC #231572, November 2011)

• IDC's Worldwide Technical Server Taxonomy, 2011 (IDC #231335, November 2011)

• High Performance Computing Center Stuttgart (IDC #230329, October 2011)

• Oak Ridge $97 Million HPC Deal Confirms New Paradigm for the Exascale Era (IDC #lcUS23085111, October 2011)
• Large-Scale Simulation Using HPC: HPC User Forum, September 2011, San Diego, California (IDC #230694, October 2011)

• China's Third Petascale Supercomputer Uses Homegrown Processors (IDC #lcUS23116111, October 2011)

• Exascale Challenges and Opportunities: HPC User Forum, September 2011, San Diego, California (IDC #230642, October 2011)

• The World's New Fastest Supercomputer: The K Computer at RIKEN in Japan (IDC #229353, July 2011)

• A Strategic Agenda for European Leadership in Supercomputing: HPC 2020 — IDC Final Report of the HPC Study for the DG Information Society of the European Commission (IDC #SR03S, September 2010)

• HPC Petascale Programs: Progress at RIKEN and AICS Plans for Petascale Computing (IDC #223625, June 2010)

• The Shanghai Supercomputer Center: China on the Move (IDC #222287, March 2010)

• IDC Leads Consortium Awarded Contract to Help Develop Supercomputing Strategy for the European Union (IDC #prUK22194910, February 2010)

• An Overview of 2009 China TOP100 Release (IDC #221675, January 2010)

• Massive HPC Systems Could Redefine Scientific Research and Shift the Balance of Power Among Nations (IDC #219948, September 2009)

Synopsis

This IDC study explores the potential impact of the latest generation of large-scale supercomputers that are just now becoming available to researchers in the United States and around the world. These systems, along with the ones planned to be installed over the next few years, create an opportunity to substantially change how scientific research is accomplished in many fields. According to Earl Joseph, IDC program vice president, High-Performance Computing, "We see the use of large-scale supercomputers becoming a driving factor in the balance of power between nations."
Copyright Notice

This IDC research document was published as part of an IDC continuous intelligence service, providing written research, analyst interactions, telebriefings, and conferences. Visit www.idc.com to learn more about IDC subscription and consulting services. To view a list of IDC offices worldwide, visit www.idc.com/offices. Please contact the IDC Hotline at 800.343.4952, ext. 7988 (or +1.508.988.7988) or sales@idc.com for information on applying the price of this document toward the purchase of an IDC service or for information on additional copies or Web rights.

Copyright 2012 IDC. Reproduction is forbidden unless authorized. All rights reserved.