Here are the “Top Five” stories highlighting what’s hot in High Performance Computing.
NVIDIA Out in Force at Supercomputing ‘16

Over the past few centuries, nothing has done more to make lives better than science. And over the past few decades, supercomputing has emerged as a key tool for pushing science forward.

It’s why our CEO, Jen-Hsun Huang, will headline our presence at next week’s SC16 annual supercomputing show in Salt Lake City, Utah, an event that draws 14,000 researchers, national lab directors, and others from around the world.

Jen-Hsun will kick off our presence at the show on Monday at 7:30 pm MT with an appearance at the NVIDIA Theater we’ve set up at our booth on the show floor. Over the course of the week, the theater will be the setting for talks from more than a dozen supercomputing experts.
Gordon Bell Prize Finalist Uses GPUs to Speed Up the Design of Environmentally Friendly Aircraft

Lighter planes emit lower amounts of greenhouse gases, and many designers have zeroed in on reducing the weight of jet engine turbines. The researchers created open-source software that lets designers use GPU-powered supercomputers to more efficiently and accurately simulate how new designs perform.

This work earned Vincent and his team a spot as one of six finalists for the ACM Gordon Bell Prize, considered the “Nobel Prize of supercomputing.” Awarded annually at the Supercomputing conference, the prize recognizes outstanding scientific achievement on the biggest and most powerful supercomputers.

The next Gordon Bell winner will be crowned at SC16, to be held Nov. 13-18 in Salt Lake City. On Nov. 16 at the show, Vincent will present his paper on green aviation.
Going into SC16, CENATE Flexes Its Growing Muscle with NVIDIA GPUs
In September, the Center for Advanced Technology Evaluation (CENATE) at Pacific Northwest National Laboratory (PNNL) took possession of NVIDIA’s DGX-1 supercomputer, built on Pascal-generation P100 GPUs. More on what they are doing with it later. Soon, IBM will deliver its ConTutto memory technology. Data Vortex’s advanced switch technology is already in-house, along with products (and ideas) from a handful of other technology heavyweights. Now entering its second year, CENATE already has some potent equipment and ambitious ideas.

“We have established the lofty goal for us to even design some neuromorphic technologies that are doing machine learning natively, and not as you can do machine learning, for example, on a GPU, in which you sort of come in from behind and map machine workload to the architecture of the GPU,” says Adolfy Hoisie, PNNL’s chief scientist for computing and CENATE’s principal investigator and director. All things (time and money) being available, “we would like those neuromorphic systems chips, whatever, to actually cast them in silicon.”
ORNL, Tokyo Tech, and ETH Zurich Announce the ADAC Consortium
OAK RIDGE, Tenn., Nov. 10, 2016--Leaders in hybrid accelerated high-performance computing (HPC) in the United States (U.S.), Japan, and Switzerland have signed a memorandum of understanding (MOU) establishing an international institute dedicated to common goals, the sharing of HPC expertise, and forward-thinking evaluation of computing architecture.

The MOU authorizes the creation of the Accelerated Data Analytics and Computing (ADAC) institute to support collaborative projects and programs that bridge the respective HPC missions of the U.S. Department of Energy's (DOE) Oak Ridge National Laboratory (ORNL), the Tokyo Institute of Technology (Tokyo Tech), and the Swiss Federal Institute of Technology, Zurich (ETH Zurich). All three organizations manage HPC centers that run large, GPU-accelerated supercomputers and provide key HPC capabilities to academia, government, and industry to solve many of the world's most complex and pressing scientific problems.
Progress in Making Parallel Code Easier & More Portable with OpenACC
No single tool yet spans all of the different processors and coprocessors used in HPC, and perhaps none ever will. But the OpenACC community, founded by Cray, PGI, NVIDIA, and CAPS back in November 2011 to deliver a common way to put parallelization directives into code for compilers, has made great progress toward this goal, along with other organizations that have a vested interest in HPC code portability and acceleration. At the upcoming SC16 supercomputing conference next week, the OpenACC community will be talking about its progress and also hosting meetings that aim to bring OpenMP and OpenACC community members together to discuss how they can work together, despite the very different ways in which their approaches express parallelism in code.
To get a head start on SC16, Michael Wolfe, technical chair for OpenACC and technology lead at compiler maker The Portland Group (which NVIDIA acquired in July 2013), and Duncan Poole, president of OpenACC and partner alliances lead at NVIDIA, sat down with The Next Platform to give us a sense of the progress that has been made in making parallel code easier to write and more portable.