HERE ARE THE “TOP FIVE” STORIES
HIGHLIGHTING WHAT’S HOT IN HPC AND AI

TOP 5
1. Momentum Builds for US Exascale
2. HPC Connects in Denver: Student Cluster Competition
3. Facebook’s Expanding Machine Learning Infrastructure
4. Steve Oberlin from NVIDIA Presents: HPC Exascale & AI
5. Delivering Predictive Outcomes with Superhuman Knowledge
1. MOMENTUM BUILDS FOR US EXASCALE
First and most visible is the initial installation of the
Summit system at Oak Ridge National Laboratory (ORNL)
and the NNSA Sierra system at Lawrence Livermore
National Laboratory (LLNL). Both systems are being built
by IBM using Power9 processors with Nvidia GPU co-processors.
The machines will have two Power9 CPUs per
system board and will use a Mellanox InfiniBand
interconnection network.
Beyond that, the architecture of each machine is slightly
different. The ORNL Summit machine will use six Nvidia
Volta GPUs per two Power9 CPUs on a system board and
will use NVLink to connect to 512 GB of memory.
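The node layout above lends itself to a quick back-of-the-envelope calculation. The sketch below tallies the GPU peak of one Summit system board; the ~7.8 teraflops double-precision figure per V100 is Nvidia's published spec, not something stated in this article, so treat it as an assumption.

```python
# One Summit system board: two Power9 CPUs feeding six Volta GPUs
# over NVLink, sharing 512 GB of memory (per the article).
GPUS_PER_BOARD = 6
BOARD_MEMORY_GB = 512
V100_FP64_TFLOPS = 7.8  # assumed per-GPU double-precision peak

# Aggregate double-precision GPU peak for the board.
board_gpu_peak = GPUS_PER_BOARD * V100_FP64_TFLOPS

# Memory available per GPU if the board's 512 GB were split evenly.
memory_per_gpu = BOARD_MEMORY_GB / GPUS_PER_BOARD

print(f"GPU peak per board: {board_gpu_peak:.1f} TF")
print(f"Memory per GPU (even split): {memory_per_gpu:.1f} GB")
```

This ignores the Power9 CPUs' own floating-point contribution, which is small next to six Voltas.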
ARTICLE
2. HPC CONNECTS IN DENVER:
STUDENT CLUSTER COMPETITION
All participants opted for GPUs this year, with each team
using either V100 or P100 GPUs from Nvidia. This year also
saw the use of IBM Power8 processors, and the team from
Illinois chose to use AMD CPUs for its system. Teams can
use any configuration of hardware but are capped at 3,000
watts for the total system.
This year the winner was Nanyang Technological University,
which achieved a Linpack score of 51.8 Teraflops. The
students opted for a two-node system with a total of 88
cores combined with 8 V100 GPUs and an EDR interconnect.
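The 3,000-watt cap makes the winning score easy to translate into power efficiency. The sketch below divides Nanyang's Linpack result by the cap; since the team's actual draw during the run may have been below 3 kW, this is a conservative (worst-case-power) estimate.

```python
# Winning Linpack score and the competition's power cap, from the article.
linpack_tflops = 51.8
power_cap_watts = 3000.0

# Efficiency in gigaflops per watt, assuming the system ran at the cap.
gflops_per_watt = (linpack_tflops * 1000.0) / power_cap_watts

print(f"~{gflops_per_watt:.1f} GF/W at the 3 kW cap")
```

For comparison's sake, production Green500 entries of the same era were in a similar double-digit GF/W range, which shows how close student builds run to the state of the art.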
ARTICLE
3. FACEBOOK’S EXPANDING MACHINE LEARNING
INFRASTRUCTURE
ARTICLE
The company’s “Big Basin” system, unveiled at the OCP
Summit last year, is a successor to the first-generation
“Big Sur” machine that the social media giant
unveiled at the Neural Information Processing
Systems conference in December 2015. As we noted
in a deep dive into the architecture at its release,
the Big Sur machine crammed eight of Nvidia’s Tesla
M40 accelerators, which slide into PCI-Express 3.0 x16
slots and each have 12 GB of GDDR5 frame buffer
memory for CUDA applications to play in, along with two
“Haswell” Xeon E5 processors, into a fairly tall
chassis. Since then, the design has been extended to
support the latest Nvidia Volta V100 GPUs.
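One concrete payoff of that GPU upgrade path is aggregate accelerator memory per chassis. The 12 GB M40 figure comes from the article; the 16 GB capacity for a launch-era V100 is an assumption on my part.

```python
# Eight accelerators per chassis, per the article's description.
GPUS_PER_CHASSIS = 8
M40_GB = 12   # Big Sur's Tesla M40, from the article
V100_GB = 16  # assumed launch-era V100 capacity

big_sur_total = GPUS_PER_CHASSIS * M40_GB    # total frame buffer, Big Sur
volta_total = GPUS_PER_CHASSIS * V100_GB     # total after the V100 refresh

print(f"Big Sur: {big_sur_total} GB, Volta refresh: {volta_total} GB")
```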
4. STEVE OBERLIN FROM NVIDIA PRESENTS:
HPC EXASCALE AND AI
Steve Oberlin is responsible for NVIDIA’s Tesla roadmap
and architecture. Tesla GPUs are NVIDIA’s flagship
processors for high performance computing, delivering
extreme parallel processing, unrivaled processing
power, and world-leading efficiency. They are playing a
critical part in the race to build exascale computers to
tackle the world’s most complex computational
challenges in science and industry.
In this video from SC17, Steve Oberlin from NVIDIA
presents: HPC Exascale & AI.
ARTICLE & VIDEO
5. DELIVERING PREDICTIVE OUTCOMES WITH
SUPERHUMAN KNOWLEDGE
Massive data growth and advances in acceleration
technologies are pushing modern computing capabilities
to unprecedented levels and changing the face of entire
industries. Today’s organizations are quickly realizing
that the more data they have, the more they can learn,
and powerful new techniques like artificial intelligence
(AI) and deep learning are helping them convert that
data into actionable intelligence that can transform
nearly every aspect of their business. NVIDIA GPUs and
Hewlett Packard Enterprise (HPE) high performance
computing (HPC) platforms are accelerating these
capabilities and helping organizations arrive at deeper
insights, enable dynamic correlation, and deliver
predictive outcomes with superhuman knowledge.
ARTICLE