SuperGreen Computing
Superconducting Computers as Green Technology
by Frank Ortmann
Superconducting computing has been 'the next great thing' since the 1960s, promising
speeds that appear out of reach of semiconductor technologies, and yet it has remained
stuck in a few niche applications. The demand for green technologies in large-scale
computing (LSC) may change everything.
Superconducting digital computers would not appear different to an end user, although
the underlying technology has significant differences that give it unique advantages.
Superconducting computers make use of the unique properties of superconductors,
materials that lose all DC electrical resistance below some critical temperature that
depends on the material. Niobium has a critical temperature of 9 K and has been
used to develop the most complex superconducting integrated circuits to date.
Niobium-based circuits typically operate at about 4 K (−269 °C, the boiling point of
liquid helium), a compromise between staying well below the critical temperature for
good superconducting properties and keeping the temperature as high as possible to
hold refrigeration costs down. The cryogenic refrigeration necessary to provide such low
temperatures was once considered to be an obstacle, but this problem has been solved
as demonstrated by the many magnetic resonance imaging (MRI) machines operating
in hospitals around the world. Also, where semiconductor electronics use transistors to
perform logic functions as part of complementary metal–oxide–semiconductor (CMOS)
technology, superconducting circuits make use of Josephson junctions (JJ) as part of
single flux quantum (SFQ) technology. The advantages of superconducting technology
are the ability to perform logic operations and to transmit data faster and using far less
energy.
The reasons for superconducting computing's failure to launch are many, but boil down
to LSC vendors and consumers not yet seeing its advantage over CMOS, the dominant
semiconductor technology since the 1980s. Recent developments in the world of LSC
make energy efficiency of key importance. Counter-intuitively, this new need for green
technology has created an opportunity for superconducting computing to play a major
role and perhaps become the dominant technology.
Large-Scale Computing Classes
LSC is dominated by two classes of machines with different applications. Power
requirements and energy efficiency (operations per unit of energy) have become
important for both classes.
Supercomputers are created to perform tremendous numbers of operations per second
to simulate things like weather/climate, automobile crashes, combustion, biological
processes, cosmological events, etc. The TOP500 project has maintained a list of the
500 most powerful supercomputers in the world since 1993, ranking them according to
their performance on the LINPACK Benchmark. More recently, supercomputers have
been ranked according to their energy efficiency too. A focus on the energy use of
supercomputers has resulted in the creation of the Green500 project, which has listed
the 500 most energy efficient supercomputers since 2007. Other terms used for this
class of computing include high-performance computing (HPC), high-end computing
(HEC) and capability computing.
Data centres are facilities built to house a large number of computer systems (and the
infrastructure required to communicate with the outside world, remain cool and provide
an uninterrupted service) that store huge quantities of data and continue to grow as
more information storage and processing is performed in the cloud. Lists of data
centres include Datacentres.org and the US Data Center List. Other terms for this class
of machines include server farms, mainframes and capacity computers.
Typically supercomputers and data centres have been treated as distinct, but some
overlap between the two paradigms is starting to appear. Amazon’s Cluster Compute
Eight Extra Large project is an attempt to provide customers with the ability to make
use of Amazon Web Services’ considerable data centre resources for applications
traditionally performed by a supercomputer. Google has donated a billion core-hours
to scientific research with its Google Exacycle for Visiting Faculty project. By making
use of spare computing capacity at their facilities, data centres can supply those that
cannot afford a dedicated supercomputer, or would only require it infrequently, with the
capability to run computationally intensive applications.
The growing world-wide realisation that supercomputing contributes to economic
competitiveness has made it a priority for governments. Supercomputers make it
possible to model complex systems,
such as those from biological, engineering, mathematical, geological and cosmological
realms. This may either not be possible any other way (how else could one model the
universe?), or the research may require experiments that are more expensive, complex
or unethical (this would particularly apply in medical research). The ability to perform
such large-scale modelling gives researchers in the countries that have a greater
supercomputing capacity an advantage over researchers in those countries that do not
have access to such infrastructure.
Energy Efficiency — The New Problem
The next supercomputing goal is to reach exascale performance: 10¹⁸ floating point
operations per second (FLOPS), about 10 million times more than the central
processing unit (CPU) in a powerful home PC can deliver. However, even
though supercomputers are becoming more efficient, power consumption has been
identified as a major hurdle on the road to exascale computing. Moving, storing and
processing information in the supercomputer takes energy, which costs money. The
more information is processed, the greater the energy requirement (if overall efficiency
remains the same). The quantity of available computing can be limited by energy or
power. Power is the energy use per unit time, commonly measured in watts (W). The
higher the power requirement, the more difficult it becomes to supply. Ultimately, the
economic benefits of the work performed on these machines must exceed the cost of
building and running them, so increasing the FLOPS/W efficiency is becoming critical
for the growth of supercomputing.
Recent attempts to reduce power requirements have applied graphics processing units
(GPU) and mobile processors similar to those in smart phones. Projects such as the
307 200 core, 20 PFLOPS Titan supercomputer collaboration are using a combination
of CPUs and GPUs, while the Mont-Blanc exascale project will attempt to use ARM
mobile processors. Some supercomputers may eventually use a combination of ARM
and GPU chips. The Mont-Blanc project must achieve 50 GFLOPS/W to remain within
the 20 MW energy budget (which may only be possible by 2022), but the current
efficiency leader achieves only 2.097 GFLOPS/W. An exascale computer at that
efficiency would require 477 MW of power, which would cost approximately $280 million
(at industrial rates) to $430 million (at commercial rates) per year to supply in the USA
(assuming 100% uptime). Achieving the required roughly 24-fold improvement will be a
tall ask, but there
is a company that claims that they can achieve 70 GFLOPS/W. However, it is uncertain
if these 64 core, 800 MHz, single-precision chips could maintain such impressive
energy efficiency when part of a large, high-performance computer.
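The power and cost figures above can be checked with a quick back-of-the-envelope calculation. This is a sketch, not an official methodology: the electricity rates (roughly 6.7 c/kWh industrial and 10.3 c/kWh commercial, consistent with 2011 US averages) and the exact leader efficiency are assumptions backed out of the quoted numbers.

```python
# Back-of-the-envelope check of the exascale power and cost figures.
# Rates are assumptions, roughly 2011 US averages (not from the text).

EXA_FLOPS = 1e18          # exascale target: 10**18 FLOPS
EFFICIENCY = 2.097e9      # FLOPS/W, consistent with the 477 MW quoted
HOURS_PER_YEAR = 8760     # 100% uptime assumed

power_w = EXA_FLOPS / EFFICIENCY                 # ~477 MW
annual_kwh = power_w / 1000 * HOURS_PER_YEAR     # ~4.2 billion kWh

industrial = annual_kwh * 0.067 / 1e6            # $ million per year
commercial = annual_kwh * 0.103 / 1e6

print(f"Power: {power_w / 1e6:.0f} MW")                          # ~477 MW
print(f"Cost: ${industrial:.0f}M - ${commercial:.0f}M per year")  # ~$280M-$430M
```

The same arithmetic shows why the 20 MW budget forces 50 GFLOPS/W: at that efficiency the power requirement drops to 1e18 / 50e9 = 20 MW exactly.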
Figure 1: K computer in Japan.
Currently, the three leading supercomputers are the K computer (10.5 PFLOPS),
Tianhe-1A (2.6 PFLOPS) and Jaguar (1.8 PFLOPS). The K computer requires at
least 12.7 MW of power to run (an efficiency of 830 MFLOPS/W) at an annual cost of
about $10 million. Since the average US household uses 10 896 kWh of electricity a
year, that is enough to power over 10 000 homes! The top 10 supercomputers require
enough power for 36 600 homes, while the 273 supercomputers (of the TOP500) of
which we know the power requirements (a total of 162 MW) could power over 130 000
homes. Note that these costs do not even take into account the difficulties and cost of
supplying so much power to a single location (equivalent to the peak power output of
small power stations).
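The household comparisons follow from a single conversion. A minimal sketch, assuming 100% uptime and the average household consumption quoted above:

```python
# Translating continuous supercomputer power draw into average US households.
HOME_KWH_PER_YEAR = 10_896   # average US household consumption (from the text)
HOURS_PER_YEAR = 8760        # 100% uptime assumed

def homes_powered(power_mw: float) -> float:
    """Households whose annual consumption equals this continuous draw."""
    annual_kwh = power_mw * 1000 * HOURS_PER_YEAR  # MW -> kW, then kWh/year
    return annual_kwh / HOME_KWH_PER_YEAR

print(f"K computer (12.7 MW): {homes_powered(12.7):,.0f} homes")        # >10 000
print(f"273 listed machines (162 MW): {homes_powered(162):,.0f} homes")  # >130 000
```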
For all the electricity that supercomputers use, data centres use far more; they now
have a significant impact on energy consumption world-wide. Data centres drew an
estimated 31 GW of power in 2011 (272 TWh of energy per year, assuming 100%
up-time), approximately equal to the electricity consumption of Spain, and were
responsible for approximately 200 million metric tons of CO2 emissions per year
(equivalent to approximately 40 million cars, 470 million barrels of oil or 48 coal power
plants). In comparison, the fastest 500 supercomputers use only approximately a
quarter of a gigawatt of power in total.
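The power-to-energy conversion behind these figures is simple to verify (a sketch; 100% up-time is assumed, as in the text):

```python
# Converting the data centres' continuous 31 GW draw into annual energy,
# and comparing it with world electricity production.
POWER_GW = 31
HOURS_PER_YEAR = 8760                 # 100% up-time assumed

energy_twh = POWER_GW * HOURS_PER_YEAR / 1000   # GWh -> TWh
print(f"{energy_twh:.0f} TWh per year")         # ~272 TWh

WORLD_TWH_2009 = 20_055               # world electricity production, 2009
share = energy_twh / WORLD_TWH_2009
print(f"{share:.1%} of world electricity")      # ~1.4%
```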
Figure 2: An Open Compute Project (Facebook) data centre.
In the next year, power usage by data centres is expected to grow by a further 4.4
GW, with the USA adding 820 MW, and Germany and China contributing 500 MW
each. World electricity production in 2009 was an estimated 20 055 TWh, so data
centres alone use approximately 1.4% of the world's electricity output. The energy
requirements of computers have become such a large issue for some in the industry
that they have started energy-related groups such as The Green Grid and Climate
Savers Computing. These groups aim to provide a platform for the various
stakeholders to collaborate on improving the energy efficiency of anything from PCs to
data centres in order to reduce their environmental impact. Google's own investment
in renewable energy is nearing $1 billion (though these types of investments may
be coming to an end), and Facebook is building a 28 000 square metre, 120 MW (of
hydroelectric power), $760 million server farm near the Arctic Circle, far away from any
major cities, to make use of the cold climate to cool the servers.
To better understand this problem, consider the number of power sources required.
The table below shows the number of power sources of different types that would be
required to power all data centres in 2011. For example, powering the world's data
centres would require 14 times the current total solar electricity production (excluding
private solar generation), or 22 average-sized nuclear power plants.
Table 1: The number of power plants required to power the world's data centres
(generally the largest power stations of their kind).

Power Source                    Location      Type                        Output       Year   No. Req.
Total Electricity               World         Electricity                 20 055 TWh   2009   0.014
Total Hydroelectric             World         Hydroelectric               3 329 TWh    2009   0.082
Total Nuclear                   World         Nuclear                     2 697 TWh    2009   0.108
Total Wind                      World         Wind                        194.4 GW     2011   0.159
Three Gorges Dam                China         Hydroelectric               22.5 GW             1.38
Total Geothermal                World         Geothermal                  10.7 GW¹     2010   2.89
Kashiwazaki-Kariwa              Japan         Nuclear                     8.2 GW              3.77
Surgutskaya GRES-2              Russia        Natural Gas                 5.6 GW              6.46
Datang Tuoketuo                 China         Coal                        5.4 GW              5.74
Kashima                         Japan         Fuel Oil                    4.4 GW              7.05
Total Solar                     World         Solar                       21.4 TWh     2009   13.6
Eesti                           Estonia       Oil Shale                   1.6 GW              19.3
Average Nuclear                 USA           Nuclear                     12.4 TWh     2010   21.9
Shatura                         Russia        Peat                        1.1 GW              28.2
Roscoe                          USA           Wind                        782 MW              39.7
The Geysers                     USA           Geothermal                  725 MW              42.8
Alholmens Kraft                 Finland       Biofuel                     265 MW              117
Sihwa Lake                      South Korea   Tidal                       254 MW              122
Nimitz Class Aircraft Carrier   USA           Nuclear                     194 MW              160
Solnova                         Spain         Concentrated Solar Thermal  150 MW              207
Finsterwalde                    Germany       Flat-Panel Photovoltaic     80.7 MW             384
¹ Holm A, Blodgett L, Jennejohn D & Gawell, 2010, Geothermal Energy: International
Market Update, report for the Geothermal Energy Association.
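The "No. Req." column in Table 1 can be reproduced by dividing the data centres' 31 GW draw (or 272 TWh/year, for rows quoted as annual energy) by each source's output. A sketch for a few rows:

```python
# Reproducing a few 'No. Req.' entries from Table 1 by simple division.
DC_POWER_GW = 31       # world data centre draw, 2011 (capacity-based rows)
DC_ENERGY_TWH = 272    # equivalent annual energy (energy-based rows)

capacity_rows = {                         # output in GW
    "Three Gorges Dam": 22.5,
    "Nimitz Class Aircraft Carrier": 0.194,
    "Finsterwalde": 0.0807,
}
for name, gw in capacity_rows.items():
    print(f"{name}: {DC_POWER_GW / gw:.2f}")   # ~1.38, ~160, ~384

# Energy-based row: world hydroelectric output, 3 329 TWh in 2009
print(f"Total Hydroelectric: {DC_ENERGY_TWH / 3329:.3f}")  # ~0.082
```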
The Case for Superconducting Computing
What is the best solution to the ever-increasing electricity usage of data centres and
supercomputers? Superconducting computing might provide one. Even if current
technology achieves the order-of-magnitude improvement in power efficiency needed
for exascale computing, superconducting circuits may be yet another order of
magnitude more efficient. Researchers at Northrop Grumman have developed a new
family of superconducting electronic circuits called reciprocal quantum logic (RQL),
which they claim to be 300 times more efficient than projected nanoscale
semiconductors, even when cooling is included. However, the details of this calculation
were not published in the paper in which the claim was made, making it difficult to
know how well the figure would translate to a large-scale computer. At HYPRES,
efforts are underway to develop energy-efficient versions of rapid single flux quantum
(RSFQ) logic, called ERSFQ and eSFQ.
High speed is another potential advantage, given that the speed record for a
superconducting circuit is 770 GHz, compared with a speed of 96.6 GHz for a similar
semiconducting circuit. Higher speeds may be useful for problems that are not easily
parallelised, or for those that have long critical paths to completion. While a single
superconducting circuit cannot achieve both maximum speed and maximum efficiency,
the technology now has the versatility necessary to serve a wide range of large-scale
computing needs.
Figure 3: A wafer of superconducting chips.
A 2005 superconducting technology assessment found “no significant outstanding
research issues for RSFQ technologies” for the development of supercomputers
based on the technology. However, that study treated speed as the primary goal and
energy efficiency as secondary, so a new study is needed to assess the
energy-efficiency prospects of superconducting technology.
A 2008 report on technology challenges in achieving exascale systems identified
superconducting circuits as the most studied alternative to silicon-based logic
technology.
The Road Forward
Superconducting chips already outperform semiconducting chips in terms of speed,
and possibly efficiency. Convincing the LSC fraternity of the value of superconducting
technology to their cause should be high on the agenda of groups in the
superconductivity community. It will not be easy: many of the chips currently used in
LSC were designed for consumer applications (or are based on such chips), which
reduces the share of the development cost that the LSC industry needs to carry, and
superconducting chips would enjoy no such volume subsidy. However, the potential
energy savings and speed improvements may help build the case for a closer look at
superconducting technology.
There are some key objectives the superconducting circuit community needs to
achieve before the LSC community could consider superconducting computers as
an alternative. Firstly, the maximum number of JJs on a chip will have to increase:
currently superconducting circuits may have on the order of 10⁵ JJs on a chip,
whereas CMOS chips have approximately 10⁹ transistors. Then a complete processor
needs to be manufactured, followed by a complete processor with memory and
floating-point capabilities. Then multiple processors will have to be connected together
to show that the memory and interconnects all work, that software can be written for
such a machine, and that the results are reliable. Eventually multi-core processors and
support for wider word sizes will be required. Perhaps leaders in the superconducting
circuit industry could establish a clear roadmap to a complete superconducting
computer.
Conclusion
Current large-scale computing is reaching a point where the power consumption is
becoming too great. Superconducting circuits could be the technology that makes
the growth in computing power sustainable. Efforts have already begun to develop
superconducting chips for high performance computing applications. A Japanese
collaboration is also looking into building a high performance, energy efficient computer
from superconducting technology. It is a good start in the right direction.
Acknowledgements
The author would like to thank M. Volkmann, H.R. Gerber and others for their valuable
input.