DEEP FREEZE™
And Nano-Cooling Technology:
Next Generation Solution for Cooling Blade Servers



                   CASE STUDY & VALUE PROPOSITION



                                                                                         September 2011
                                                                                           Presented by:




©2011 Mobee Communications, LTD, Deep Freeze Technology Corporation, NGN Data Services
Corporation & Global Access Advisors, LLC. All rights reserved. The information contained in this article is
proprietary. As such, no part of this article may be copied or reproduced by any means, electronic or
otherwise, without the express permission of Deep Freeze Technology Corporation, NGN Data Services
Corporation and Global Access Advisors, LLC.
DEEP FREEZE™


The Deep Freeze™ blade server cooling concept is the chief component of
an overall data center design strategy. It is a “cold-plate” technology
evolution that is both “closed-loop” and “chassis-based”, representing the
most efficient cooling design in the market.

       It is an independent (after-market, retro-fit product), closed cooling
       system (100% self-contained), based on cold plate technology
       (metal composites as the cooling structure), using ionized water (non-
       damaging electro-sensitive fluid) circulating through a chassis-based
       (an actual blade-server component, replacing the relatively inefficient
       fan) cooling design.




Beneficial Highlights

As a product, Deep Freeze™ is an
      • independent unit
      • based upon a retro-fit design
      • designed as an after-market unit serving the $6B blade server industry


The Deep Freeze™ product will
      • drastically reduce the maintenance costs of blade server management
      • dramatically increase the efficiency of the data center computing power
      • obviate the need for expensive CRAC units and other equipment
      • facilitate the "green design" for data center construction and operation




                                                           www.globalaccessadvisors.com


  Deep Freeze™ and Nano-Cooling Technology:
  Next Generation Solution for Cooling Blade Servers1

  Executive Summary

It takes about 1,000 times more energy to move a data byte than it does to
perform a computation with it once it arrives. Additionally, the time taken to
complete a computation is currently limited by how long it takes to do the
moving- all of which produces heat, which slows the processing even further.

Air-cooling can go some way to removing this heat, which is why computers have
fans inside. Emerging technologies have begun to substitute liquid-cooling
agents because a given volume of water can hold 4,000 times more waste heat
than air.

Deep Freeze Technology Corp has developed a revolutionary liquid-cooling
approach that does not involve connectivity to external CRACs (computer room
air conditioners), chillers, etc.: their Direct Liquid Cooling Platform of the
HP Matrix Blade Technology (Patent Pending): Deep Freeze™.

The Deep Freeze™ design obviates the need for CFD analysis and maximizes the
power to cooling ratio, while saving real estate. They have prototyped a
plug-in, after-market replacement for the CPU fans that incorporates the
liquid cooling strategy. The beta model works for the HP ProLiant2 series, but
they have designs for horizontal (across corporate offerings) and vertical
(different alloys, densities, etc.) applications.
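The "4,000 times more waste heat" claim above can be sanity-checked from textbook volumetric heat capacities. This is a rough sketch using illustrative room-temperature figures, not values taken from the source; it lands on the order of 3,500, consistent with the text's rounded number.

```python
# Rough volumetric heat-capacity comparison of water vs. air.
# Property values are approximate figures at ~25 C (assumption,
# not from the source document).

WATER_DENSITY = 997.0         # kg/m^3
WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)
AIR_DENSITY = 1.184           # kg/m^3
AIR_SPECIFIC_HEAT = 1005.0    # J/(kg*K)

def volumetric_heat_capacity(density, specific_heat):
    """Heat absorbed per cubic metre per kelvin, in J/(m^3*K)."""
    return density * specific_heat

ratio = (volumetric_heat_capacity(WATER_DENSITY, WATER_SPECIFIC_HEAT)
         / volumetric_heat_capacity(AIR_DENSITY, AIR_SPECIFIC_HEAT))
print(f"Water stores ~{ratio:,.0f}x more heat per unit volume than air")
```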


1 A blade server is a server chassis housing multiple thin, modular electronic circuit boards, known as server blades.
Each blade is an individual server, often dedicated to a single application. The blades are literally servers on a card,
containing processors, memory, integrated network controllers, an optional Fibre Channel host bus adapter (HBA)
and other input/output (IO) ports. Blades typically come with two advanced technology attachment (ATA) or SCSI
drives.

2 HP holds the number 1 position in the world-wide server market with a 31.5% factory revenue share for 1Q11. HP's
10.8% revenue growth was led by increased demand for both their x86-based ProLiant servers and Itanium-based
Integrity servers.





Background

Increasing operational expenses (energy costs3, space provisioning4, etc.) are
forcing companies to cool their data centers more efficiently. The ubiquitous
Blade Server5 exacerbates the problem. A single blade rack consumes more than
25 kW- four times the power required for a standard server. Much of that
energy is converted to heat, so cooling blade servers presents its own unique
set of challenges for the temperature maintenance strategies of a server room
or data center.

With traditional standard rack servers, cooling was often a function of
offsetting temperature variations: by assessing hardware deployment, a simple
calculation of the heat produced would yield the resulting "cool air" required
to be pumped into the environment to maintain temperatures within the
hardware's operating limits.

But mixing hot and cold air is exactly the wrong approach to cooling blade
servers. Specific amounts of cold air need to be deployed to the blade rack
directly and quickly, while the heated air produced by energy consumption must
be ventilated quickly away from the rack.

Because blade racks require more precise ventilation, computational fluid
dynamics (CFD) is often used to model airflow movements through a data center.
By assessing the variables of the server area's physical properties and
cooling capabilities, CFD can predict the appropriate airflow mixture between
hot and cold air, and thus accurately predict the amount of cold air necessary
to cool the datacenter and the most efficient pathways of cold air circulation
directly to the servers.




3 Blade servers allow more processing power in less rack space, simplifying cabling (up to an 85% reduction) and
reducing power consumption. The advantage of blade servers comes not only from the consolidation benefits of housing
several servers in a single chassis, but also from the consolidation of associated resources (like storage and networking
equipment) into a smaller architecture that can be managed through a single interface.

4 U is the standard unit of measure for designating the vertical usable space, or height of racks (metal frame
designed to hold hardware devices) and cabinets (enclosures with one or more doors). This unit of measurement
refers to the space between shelves on a rack. 1U is equal to 1.75 inches. For example, a rack designated as 20U,
has 20 rack spaces for equipment and has 35 (20 times 1.75) inches of vertical usable space. Rack and cabinet
spaces- and the equipment which fits into them- are measured in U.
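The footnote's rack-unit arithmetic can be written as a one-line conversion. This trivial helper is an illustration, not part of any standard library; the 1U = 1.75 inch figure comes from the footnote itself.

```python
# Rack-unit arithmetic from footnote 4: 1U = 1.75 inches.
RACK_UNIT_INCHES = 1.75

def usable_height_inches(units):
    """Vertical usable space, in inches, of a rack of the given U size."""
    return units * RACK_UNIT_INCHES

print(usable_height_inches(20))  # the footnote's 20U example -> 35.0 inches
print(usable_height_inches(42))  # a common full-height 42U rack -> 73.5 inches
```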

5 The leading manufacturers in the $5.6B blade server technology market (in order of market-share): HP, IBM, Dell,
Cisco, Siemens, Fujitsu, Oracle, Sun, and NEC.






Performing CFD calculations can be quite challenging for server arrays
deploying both blade and traditional servers. Because blade servers require
more directed cooling, mechanical engineers had to dispense with traditional
datacenter airflow cooling approaches. Reliance on "raised floor" concepts
(cold air pumped through perforated floors) gave way to "in row cooling"
(alternating columns of cold and hot air, with the cold air forced
horizontally, from the back of the rack to the front). Currently, two emerging
trends seem to be gaining favor: indirect liquid cooling and direct immersion
techniques.

Indirect Liquid Cooling.

With liquid cooling, cold fluid- usually water- is piped to a special
water-cooling heat sink, called a water block, on the processor. While a
standard heat sink has metal fins to increase its surface area with the air
around the server, a water block consists of a metal pipe that goes through a
conductive metal block. The processor heats the block; cold water travels into
the block, cooling it back down and warming the water, which is then piped out
to a radiator, which cools it again. As a much better conductor of heat than
air, water is a more efficient cooling agent. And because water goes straight
to the server, there is no need to factor hot-cold mixing or CFD.

Liquid cooling is common in supercomputing and high performance computing
(HPC), where facility operators manage computing clusters producing high heat
loads. Rising heat densities have spurred predictions that liquid cooling
would be more widely adopted, but some data center managers remain wary of
having water near their equipment.

While liquid cooling is a proven technology, it does require a fair degree of
capital overhead. In addition to server room air ducts and electric sockets
for cooling units, liquid cooling requires the installation of piping and
failsafe systems (in case of a leak or other malfunction). The real estate
benefits sought by the utilization of blade server technology are often offset
by the additional space required by the extra cooling units needed for liquid
cooling.





Direct Immersion Techniques.

Direct immersion (submersibles) is another innovation in blade server cooling.
Blade server racks are entirely submersed into tubs of cooled, non-static
mineral oils.

Though not entirely "revolutionary", proponents suggest that mineral oil
coolants have been "optimized for data centers" and can support heat loads of
up to 100 kilowatts per 42U rack, far beyond current average heat loads of 4
to 8 kilowatts per rack and high-density loads of 12 to 30 kilowatts per rack.
These systems are designed to comply with fire codes and the Clean Water Act,
and integrate with standard power distribution units (PDUs) and network
switches.

Some mineral oil-style coolants can be messy to maintain. Proponents say the
coolant can be drained for enclosure-level maintenance, and individual servers
can be removed for work. Detractors suggest that the real estate utilization
of horizontal bathing tubs is substituting one issue for another.

Rather than approaching the challenge of exploding heat removal requirements
using limited, traditional measures, what's needed is a shift in approach.
Because the constant in data center heat loads has been rapid, unpredictable
change, new approaches to cooling are replacing traditional measures as a best
practice. Cooler parts last longer. When parts stay below the specified
maximum thermal limit, they operate more consistently, and voltage
fluctuations that can lead to data errors and crashes are minimized.

A 2010 survey of nearly 100 members of the Data Center Users' Group revealed
that data center managers' top three concerns were density of heat and power
(83 percent), availability (52 percent), and space constraints/growth (45
percent).

Answering these concerns requires an approach that delivers the required
reliability and the flexibility to grow, while providing the lowest cost of
ownership possible.

The industry seeks a solution that:
• can effectively and efficiently address high-density zones
• supports flexible options that are easily scalable
• incorporates technologies that improve energy efficiency, and
• becomes an element of a system that is easy to maintain and support

Deep Freeze™ is one such viable solution.



Deep Freeze Technical Approach

Deep Freeze™ is predicated upon Cold Plate Technology (a liquid-cooled
dissipater). The technology uses an aluminum or other alloy "plate" containing
internal tubing through which a liquid coolant is forced, to absorb heat
transferred to the plate by transistors and other components mounted on it
(Fig. 1).

For blade servers, with their compact design and increasing power densities,
cold plates represent viable contact cooling mechanisms. When air-cooled heat
sinks are inadequate, liquid-cooled cold plates are the ideal high-performance
heat transfer solution.

Cold plate technologies utilize varying geometries and coolants to provide a
range of thermal performances. The lower the thermal resistance, the better
the performance of the cold plate.

As a chassis or component-level approach, Deep Freeze™ represents a superior
technology.
         Fig. 1 Design
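The role of thermal resistance described above can be sketched with the standard linear cold-plate model: the plate surface sits above the coolant inlet temperature by (thermal resistance × heat load). The resistance and load values below are illustrative assumptions, not Deep Freeze™ specifications.

```python
def cold_plate_case_temp(inlet_temp_c, thermal_resistance_c_per_w, heat_load_w):
    """Steady-state cold-plate surface temperature (linear model):
    a lower thermal resistance means a smaller rise above coolant temp."""
    return inlet_temp_c + thermal_resistance_c_per_w * heat_load_w

# Two hypothetical plates absorbing a 130 W processor load with 20 C coolant:
print(cold_plate_case_temp(20.0, 0.05, 130.0))  # low resistance  -> 26.5 C
print(cold_plate_case_temp(20.0, 0.20, 130.0))  # high resistance -> 46.0 C
```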





The Deep Freeze™ design contemplates using a copper fluid path and ionized
water as the cooling fluid.

Like a car radiator, the liquid CPU design circulates a cooled liquid through
a heat sink attached to the blade processor. Deep Freeze™ technology uses
ionized water- which acts as a heat sink- to pass through its module (Fig. 2).
The heat is then transferred from the hot processor to the heat sink module.
The hot liquid then moves through the Deep Freeze™ heat sink module and into
its unit, transferring the heat into the ambient air outside the blade.

The heat exchange takes place inside the cooled interior of the Deep Freeze™
unit, and the cooled liquid travels back into the blade through the heat sink
module to continue the process. An essential aspect of the Deep Freeze™
technology is that cooling occurs in a closed-coupled environment. This allows
the heat exchange between the nano-chiller and the Deep Freeze™ unit without
heating the room's exterior.




Fig. 2 Heat Dissipation Principle
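The circulation loop described above obeys a simple energy balance, Q = flow × specific heat × temperature rise: the coolant flow must carry away the processor's heat within an acceptable rise. A minimal sketch, assuming water-like coolant properties and a hypothetical per-blade load (neither figure comes from the source):

```python
WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K), approximate value for water

def required_flow_kg_per_s(heat_load_w, delta_t_k):
    """Coolant mass flow needed to carry heat_load_w away
    with a delta_t_k rise in coolant temperature (Q = m*c*dT)."""
    return heat_load_w / (WATER_SPECIFIC_HEAT * delta_t_k)

# Hypothetical: a 383 W blade with a 10 K allowed coolant temperature rise
flow = required_flow_kg_per_s(383.0, 10.0)
print(f"{flow * 1000:.2f} g/s")  # ~9.15 g/s, a very modest flow rate
```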





       Benefits of Closed Loop Cooling

       There are basic fundamentals to contemporary data center management: (1) the
       higher power consumption of modern blade servers produces more heat6; (2)
       almost all power consumed by rack-mounted equipment is converted to sensible
       heat; and (3) that heat increases the temperature of the environment.


       A 2010 HP Technical Study7 surveyed the various cooling strategies and their
       effects upon a representative example of power consumption in a 42U IT
       equipment rack: ProLiant DL160 G6 1U servers (42 servers @ 383 W per server).
       The cooling requirement was computed:

                               54,901 BTU/hr ÷ 12,000 BTU/hr per ton = 4.58 tons

       HP determined that the increasing heat loads created by the latest server systems
       require more aggressive cooling strategies than the traditional open-area approach.
       (Fig. 3).
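HP's cooling-requirement computation above can be reproduced directly from the footnoted conversion factors (1 W = 3.413 BTU/hr; 12,000 BTU/hr per ton of refrigeration):

```python
# Conversion factors from footnote 6.
BTU_PER_HR_PER_WATT = 3.413
BTU_PER_HR_PER_TON = 12_000

def rack_cooling_tons(servers, watts_per_server):
    """Sensible heat load of a rack, expressed in tons of refrigeration."""
    heat_btu_hr = servers * watts_per_server * BTU_PER_HR_PER_WATT
    return heat_btu_hr / BTU_PER_HR_PER_TON

# HP's 42U example: 42 ProLiant DL160 G6 servers at 383 W each
print(round(rack_cooling_tons(42, 383), 2))  # 4.58 tons, matching the study
```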


       Figure 3: Cooling strategies based on server density/power per rack (HP 2010)

       [Chart: density (nodes per rack) plotted against power (8 to 40 kW per rack).
       As density and power rise, the recommended strategy progresses from
       traditional open-area cooling, to cold/hot aisle containment, to supplemented
       data center cooling, to closed-loop cooling, and finally to chassis/component
       level cooling and future cooling technologies.]




       6 The sensible heat load is typically expressed in British Thermal Units per hour (BTU/hr) or watts, where 1 W
       equals 3.413 BTU/hr. The rack’s heat load in BTU/hr can be calculated as follows:

                               Heat Load = Power [W] × 3.413 BTU/hr per watt

       In the United States, cooling capacity is often expressed in "tons" of refrigeration, which is derived by dividing the
       sensible heat load by 12,000 BTU/hr per ton.

       7 “Cooling Strategies for IT Equipment” (September, 2010). Hewlett Packard Development Company.

Of the cooling strategies commercially available, HP concluded that Closed
Loop Cooling is "the best solution for high-density systems consuming many
kilowatts of power. These systems have separate cool air distribution and warm
air return paths that are isolated from the open room air. Closed-loop systems
typically use heat exchangers that use chilled water for removing heat created
by IT equipment. Since they are self-contained, closed-loop cooling systems
offer flexibility and are adaptable to a wide range of locations and
environments. Closed-loop cooling systems can also accommodate a wide range of
server and power densities."

Deep Freeze™ is a closed-loop cooling design which performs at the chassis or
component level. It is the "future cooling design" predicted in the HP study.

Competitive Landscape

Currently, there are three entrants in the blade server cooling technology
space suggesting variations on the "future cooling design" theme.

1. IBM:
In July 2010, IBM announced the successful pilot launch of its newly developed
zero-emissions, liquid-cooled blade server: Aquasar. IBM worked closely with
Wolverine's MicroCool Division to develop innovative liquid cooling components
within this new high performance computer. It consumes 40 percent less energy
compared to a similar system using air-cooling technology.

The IBM blade server relies upon a proprietary MicroCool "cold plate" and
integrated Wolverine copper liquid cooling loops. The design claims to
maintain an entire electronic footprint below 80 degrees C, with a 60 degrees
C inlet fluid made up of water. There have been several challenges to IBM's
"green assertions".
(www.flickr.com/photos/ibm_research_zurich/4537326383/)

The pilot operation also utilizes the waste heat from the computer to warm the
external structures. IBM collaborated for over three years, at a cost in
excess of $22M.

2. Google:
In 2009, Google patented a "server sandwich" design in which two motherboards
are attached to either side of a liquid-cooled heat sink. Drawings submitted
with the patent illustrate Google's design and how it might be implemented in
a data center.
http://www.datacenterknowledge.com/archives/2010/07/06/google-patents-liquid-cooled-server-sandwich/

The diagram depicts the "server sandwich" assemblies deployed in a row of
racks, with each assembly connected to supply and return pipes for liquid
cooling, which are housed in the hot aisle. The illustration of the heat sink
provides a view of the grooves where processors for the motherboards would fit
onto either side, allowing the heat sink to cool two motherboards at once.

The liquid cooling design patented by Google features custom motherboards with
components attached to both sides. Heat-generating processors are placed on
the side of the motherboard that comes in contact with the heat sink, which is
an aluminum block containing tubes that carry cooling fluid. Components that
produce less heat, like memory chips, are placed on the opposite side of the
motherboard, adjacent to fans that provide air-cooling for these components.

Motherboards are attached to either side of the heat sink, creating a "server
sandwich" assembly that can be housed in a rack. The diagrams submitted with
the patent depict cabinets filled with 10 of these liquid-cooled assemblies,
suggesting each takes up 4U in a rack.

Similar to Cold Plate Technology, the heat sink can cool heat loads of up to
80 kilowatts per rack in some implementations. Google's patent says the heat
sink could be configured to use either chilled water or a liquid coolant.

3. Hardcore Computer:
In April 2010, Hardcore Computer, Inc. announced the launch of Liquid Blade™,
the first Total Liquid Submersion blade server. The initial Liquid Blade™
server platform, which is powered by two Intel® 5500 or 5600 series Xeon®
processors running on an Intel® S5500HV reference motherboard, addresses
several major datacenter challenges: power, cooling and space. Hardcore
Computer's patented technology submerges all of the heat-producing components
of the Liquid Blade.

Hardcore contends that its Liquid Blade technology is 1,350 times more
efficient than air at heat removal and increases compute density because far
less space is required between components. With little heat escaping into the
datacenter, the need for air conditioning and air moving equipment is
minimized. The net result is a much smaller physical and carbon footprint for
the datacenter. As an added benefit, no special fire protection systems are
needed to cover the servers, because all of the blade components are
submerged, so




there is no oxygen exposure. Without oxygen there is no potential for a
sustained fire.

The major criticism of the Hardcore Computer product is that it relies extensively on
proprietary parts. In order to upgrade, most parts will need to be purchased through
Hardcore Computer, thus limiting the consumer options. Other complaints range
from “sizeable footprint” to “messy operations”.


Deep Freeze™: A Comparative Study

In October 2010, Hardcore Computer engaged a third-party vendor to develop
construction budgets for two 3.2 megawatt datacenters: one using air-cooling
architecture and the other equipped with Liquid Blade™ servers. In that study of
equivalent compute power facilities, each datacenter was designed to house 6,397
servers utilizing the same 2-CPU-per-server technology.

Not surprisingly, the Liquid Blade™ servers significantly outperformed their
air-cooled competitor in three key areas: physical space needs, power density
and cooling load.

In June 2011, Deep Freeze™ was comparatively tested, using the same
methodology and criteria. The results are as follows:

Comparative Capacity

Deep Freeze™ requires far fewer physical servers due to its virtualization methodology.




    Table 1. Comparative Capacity Analysis




  Auxiliary Equipment

  Deep Freeze’s™ closed loop, chassis/component design obviates the need for
  substantial investments in traditional CRAC architectures.

Table 2. Auxiliary Equipment Comparison




Cooling Load

As each source of heat generation is examined and compared, the cooling loads of
the exterior walls, hosts, lighting, servers, and the UPS system were accounted
for. As demonstrated in Table 4, the chiller capacity required for both the
Liquid Blade™ and Air-Cooled suites is substantially greater than for the
chiller-less solution- the primary reason being that Deep Freeze™ presents a
smaller footprint to cool, while still maintaining the data center's computing
capacity.

  Table 4. Cooling Load Comparison




Power Consumption

Comparing the Deep Freeze™ design with the Air-Cooled and Liquid Blade™ suites
illustrates that the power consumed by auxiliary equipment is significantly
higher in the latter two.

  Table 5. Power Consumption





 Construction Costs

 Table 6 compares construction costs. Though all suites have identical computing
 capacity, the capital costs to construct both the Air-Cooled and the Liquid
 Blade™ architectures are, on average, 175% higher than for the Deep Freeze™
 suite.

    Table 6. Construction Costs Comparison




Total Cost of Ownership (TCO)

The Deep Freeze™ approach to data center design, architecture and cooling
methodologies (after-market, retro-fit design) results in a significant overall savings in
estimated TCO. As a retro-fitted, after-market product, Deep Freeze™ units
(replacing the fans installed in blade-server manufacture) will substantially decrease
the TCO due to power efficiencies realized and cooling expenditures reduced.



         Table 7. TCO






Deep Freeze™: The Value Proposition Realized
Green Design and the “Whole System Approach”



Power and cooling issues can be articulated separately for the purpose of
explanation and analysis, but effective deployment of a total virtualization
solution requires a system-level view. The shift toward virtualization, with
its new challenges for physical infrastructure, re-emphasizes the need for
integrated solutions using a holistic approach- that is, consider everything
together, and make it work as a system8.

All system components should communicate and interoperate. Demands and
capacities must be managed in real time, preferably at the rack level, to
ensure efficiency.

A recent and significant datacenter science study concluded that
"Virtualization is an undisputed leap forward in data center evolution- it
saves energy, it increases computing throughput, it frees up floor space, it
facilitates load migration and disaster recovery. Less well known is the
extent to which virtualization's entitlement can be multiplied if power and
cooling infrastructure is optimized to align with the new, leaner IT profile.
In addition to the financial savings obtainable, these same power and cooling
solutions answer a number of functionality and availability challenges
presented by virtualization."9

Two major challenges that virtualization poses to physical infrastructure are
the need for dynamic power and cooling systems, and the rack-level, real-time
management of capacities.

These challenges have been met by Deep Freeze's™ closed-loop,
chassis/component cooling architecture and its real-time capacity management
module. These solutions are based on design principles that resolve functional
challenges, reduce power consumption, and increase efficiency.




 8 Niles, Suzanne. "Virtualization: Optimizing Power and Cooling to Maximize Benefits", 2011. APC Data Center
 Science Center.

 9 Ibid, at page 19.


The comprehensive Deep Freeze™ solution is a self-sufficient green-energy data
center that uses ultra-efficient cooling methods for both blade and structure
design. This challenge is met by taming the cooling plant's energy consumption
and by designing a self-sufficient green building using alternative energy
solutions- such as solar energy- to offset auxiliary energy requirements such
as lighting devices.

The second aspect involves cooling the blades at the CPU level- this being the
most efficient method to extract heat from the blade, and more importantly,
from the rack. As part of the cooling solution, the UPS and storage components
were physically placed in a separate area in order to control cooling with a
variable airflow and to maintain a constant temperature in the surrounding
space.

Deep Freeze™ data centers account for the integration of solar and natural gas
solutions as an integral part of the self-sustained ability of the data
center. By utilizing grid-tie solar systems and natural gas generators, the
load is reduced both on and off the grid. By diverting the solar production to
the UPS network and by simulating the generators on a grid-like
infrastructure, the effective redirection of the solar production of
electricity through the generator into the UPS packs reduces the load on the
generator system.

This holistic technology uses no moving components and requires minimal energy
resources. Since the Deep Freeze™ modules are not participants in the
consumption of energy in the data center itself, the sole energy usage comes
from computing power.

In addition to offering a comprehensive model for new green-energy data
centers, Deep Freeze™ is also capable of reducing upgrade expenditures on
existing equipment, making it the ideal solution for existing data centers
looking to drastically reduce maintenance costs by optimizing cooling without
replacing costly equipment. Deep Freeze™ closed-coupled liquid CPU cooling
allows for optimization of space in existing data centers, resulting in an
increase in energy savings and the elimination of additional building space
costs.





SIPNOC CASE STUDY
Antonis Valamontes, President, Mobee Communications, LTD


Mobee Communications, Ltd. contracted NGN Data Services in 2010 to deploy the
Deep Freeze™ product in its SIPNOC and to design a Tier-3 class data center in
the United States. The data center had to reliably support a 1,500-server computing
capacity while integrating solar and natural gas options as part of a self-sustaining
micro grid design, all within a 1,500 sq. ft. environmental footprint.
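
The 1,500-server-in-1,500-sq.-ft. requirement can be sanity-checked with simple density arithmetic. The enclosure and rack figures below reflect the HP C7000 (a 10U enclosure holding 16 half-height blades, up to four per 42U rack); the consolidation ratio is an illustrative assumption, since the case study does not state one.

```python
import math

# Assumed consolidation ratio (virtual servers per physical blade); the
# other figures are standard for the HP C7000 BladeSystem.
TARGET_SERVERS = 1500          # required computing capacity
VMS_PER_BLADE = 8              # illustrative assumption
BLADES_PER_ENCLOSURE = 16      # half-height blades per C7000
ENCLOSURES_PER_RACK = 4        # 10U enclosures per 42U rack

blades = math.ceil(TARGET_SERVERS / VMS_PER_BLADE)
enclosures = math.ceil(blades / BLADES_PER_ENCLOSURE)
racks = math.ceil(enclosures / ENCLOSURES_PER_RACK)

# prints "188 blades -> 12 enclosures -> 3 racks"
print(f"{blades} blades -> {enclosures} enclosures -> {racks} racks")
```

Even at this modest consolidation ratio, a handful of racks meets the capacity target, which is what makes the 1,500 sq. ft. footprint plausible.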

Mobee Communications, Ltd. is a venture-backed start-up that offers mobile IP
telephony through its Virtual SIPNOC design. By designing the SIPNOC site with
Deep Freeze™ technology, NGN Data Services gave Mobee confidence that its
environmental needs regarding power, cooling, humidity and micro grid capability
would be met.

The primary goal was to build a self-sustainable facility that could operate efficiently
both on and off the grid. Building the facility in Florida presented its own set of unique
challenges due to the intense heat and humidity. NGN's solution was to integrate
solar energy and natural gas power generation as the primary sources of energy,
while designing the grid as a backup system, in effect an "on-grid UPS": the grid is
available should we choose to use it, but is not mandatory.

In effect, NGN created a "micro grid." The micro grid approach is extremely cost-
efficient because it can build up excess energy and push it to the grid; everything
produced in the system is made for consumption, not for return.

The design incorporates a two-shell building, a building within a building, creating a
6-inch air pocket between the outer and inner walls. The purpose of this approach
was to create a natural insulator, the same way feathers create tiny air pockets in
sleeping bags and comforters to insulate and reduce the escape of heat. For the
underlining of the roof space, NGN used an "icing" approach to create an R-factor,
further insulating the building and preventing outside air from entering.
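
The payoff of a higher R-factor can be estimated from the standard relation Q = A·ΔT/R. The area, temperatures and R-values below are illustrative assumptions for a Florida design day, not measurements from the SIPNOC build.

```python
# Heat gain through the roof, Q = A * dT / R, in imperial units:
# A in ft^2, dT in degrees F, R in hr*ft^2*F/BTU, Q in BTU/hr.
# All inputs are illustrative assumptions.

def roof_heat_gain_btu_hr(area_sqft, t_out_f, t_in_f, r_value):
    """Steady-state conductive heat gain through the roof assembly."""
    return area_sqft * (t_out_f - t_in_f) / r_value

area, t_out, t_in = 1500.0, 95.0, 70.0   # assumed hot Florida afternoon

for r in (10, 30):
    q = roof_heat_gain_btu_hr(area, t_out, t_in, r)
    # prints "R-10: 3750 BTU/hr" and "R-30: 1250 BTU/hr"
    print(f"R-{r}: {q:.0f} BTU/hr")
```

Tripling the R-value cuts the roof heat gain to a third, which is the cooling load the insulation treatment is meant to avoid.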

The need to retain the computing capacity in a confined space, as well as Mobee's
specific requirements for virtualization, made the HP Matrix blade system with
C7000 enclosures our top choice. The HP Matrix's superior energy management
solution aligned perfectly with the NGN Data Services green energy data suite
model; by combining it with the Deep Freeze™ solution, cooling optimization was
achieved with no additional energy consumption costs.




While designing the server room, NGN isolated the servers in their own space.
Every other accessory, hardware and storage device was assigned to a smaller
space with a controlled air environment. When completed, the network had an
estimated storage capacity of 850 terabytes and a computing capacity of over
900 virtual servers, all with dedicated NIC interfaces.

The data center server room, where heat is generated by the blades, is referred to
as the "hot room"; the adjacent room, where the storage and UPS are located, is
referred to as the "cold room." The temperatures in both rooms are maintained at a
constant 70°F, controlled electronically by variable airflow vents located throughout
the building.
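
The variable-airflow scheme described above can be sketched as a simple proportional controller that opens the vents wider as room temperature climbs past the 70°F setpoint. The gain and the mapping to vent position are assumptions for illustration; the actual building controller is not specified in the case study.

```python
# Proportional vent control sketch: vent opening grows linearly with the
# temperature error above the setpoint, clamped to [0, 1]. The gain KP
# is an assumed value, not a figure from the SIPNOC deployment.

SETPOINT_F = 70.0
KP = 0.25  # vent fraction per degree F of error (assumed gain)

def vent_opening(temp_f, setpoint=SETPOINT_F, kp=KP):
    """Return the vent opening as a fraction between 0 (closed) and 1 (full)."""
    error = temp_f - setpoint
    return min(1.0, max(0.0, kp * error))

for t in (69.0, 70.0, 71.0, 74.0, 80.0):
    # e.g. 71.0 F -> vents 25% open; 74.0 F and above -> vents 100% open
    print(f"{t:5.1f} F -> vents {vent_opening(t):.0%} open")
```

A real installation would likely add integral action and hysteresis to avoid hunting around the setpoint, but the proportional core captures the "variable airflow" idea.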





Since the Deep Freeze™ modules were deployed to extract heat at the CPU level,
the need for large, heavy chillers to cool the server space was eliminated.
Anticipating the higher temperature of the server room, NGN designed the
adjacent room to act as a natural heat exchanger and divided the two rooms with a
glass wall, resulting in an efficient heat exchanger and a holistic method of
temperature control that required no additional energy consumption. The glass
became a natural heat exchanger transferring 14,000 BTU/hr of heat, the
equivalent of cooling 1900 virtual servers or two full racks of eight C7000
enclosures. NGN also selected a green, carbon-neutral fire suppression system
called Aero-K that creates zero ozone depletion, zero ecological hazards and zero
contribution to global warming.
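
To put the stated transfer rate in more familiar units: 1 BTU/hr is about 0.293 W, so the glass wall moves roughly 4.1 kW of heat into the cold room with no fans or pumps. A minimal conversion sketch:

```python
# Converting the glass wall's stated heat-transfer rate into watts.
# The only input taken from the case study is the 14,000 BTU/hr figure;
# the conversion factor is the standard 1 BTU/hr = 0.29307107 W.

BTU_HR_TO_W = 0.29307107

q_btu_hr = 14_000
q_watts = q_btu_hr * BTU_HR_TO_W

# prints "14000 BTU/hr = 4.1 kW"
print(f"{q_btu_hr} BTU/hr = {q_watts / 1000:.1f} kW")
```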




A main objective of Mobee Communications, Ltd. was to become a premier mobile
IP carrier operating a globally distributed system. Our need for extensible grid
computing in a fully virtualized environment that could be rapidly deployed
anywhere in the world, with no loss in reliability or performance, was realized by
Deep Freeze Technology Corp's green energy micro grid model.






 Conclusion
 Deep Freeze™ & "Green Data Center Architecture":
 The Value Proposition Defined


Temperature management is, in itself, a comprehensive solution for self-sustaining
green energy data centers. Deep Freeze's™ "plug & play," retrofitted liquid cooling
technology provides an after-market, closed-loop liquid cooling solution at the
CPU level. Deep Freeze™ obviates the need to replace existing blade servers and
reduces dependence on external CRAC architectures.

Deep Freeze™ technology is the most cost-effective cooling technology in the
industry today, representing a paradigm shift in deploying and cooling high-
performance computing environments. No other cooling method delivers such a
marked reduction in cost, energy consumption and space while simultaneously
providing the ultimate green energy, eco-friendly data suite solution.

Beyond the benefits of Deep Freeze™ as the ultimate unified solution for cooling
optimization and overhead cost reduction, the virtualization methodology and "green
data center architecture" save money, increase computing power and conserve
energy. By offering environmentally and spatially conscious solutions, Deep
Freeze™ nano-chiller technology has become the next evolution in green energy
data centers.

The Deep Freeze™ CPU chilling technology + the NGN virtualization methodology
+ the NGN "green" data center architecture model = an across-the-board solution
for reducing both the costs and the environmental footprint of high-performance
computing. Benefits include:

      • Environmentally friendly approach and design
      • Enhanced space, performance, efficiency and liquid cooling
      • Energy selective: deployable in areas where energy is limited or expensive
      • Increases the capacity of existing stand-alone data centers
      • Designs can be rapidly deployed as "one-offs" or in pod-like units
      • Ideal for advanced military applications or natural disaster recovery efforts




Contact Information
Deep Freeze™
c/o Global Access Advisors
info@globalaccessadvisors.com

Más contenido relacionado

La actualidad más candente

Blazing Fast Lustre Storage
Blazing Fast Lustre StorageBlazing Fast Lustre Storage
Blazing Fast Lustre StorageIntel IT Center
 
Extending the lifecycle of your storage area network
Extending the lifecycle of your storage area networkExtending the lifecycle of your storage area network
Extending the lifecycle of your storage area networkInterop
 
Consolidating Oracle database servers onto Dell PowerEdge R920 running Oracle...
Consolidating Oracle database servers onto Dell PowerEdge R920 running Oracle...Consolidating Oracle database servers onto Dell PowerEdge R920 running Oracle...
Consolidating Oracle database servers onto Dell PowerEdge R920 running Oracle...Principled Technologies
 
Performance Comparison of Intel Enterprise Edition Lustre and HDFS for MapRed...
Performance Comparison of Intel Enterprise Edition Lustre and HDFS for MapRed...Performance Comparison of Intel Enterprise Edition Lustre and HDFS for MapRed...
Performance Comparison of Intel Enterprise Edition Lustre and HDFS for MapRed...inside-BigData.com
 
Dell Lustre Storage Architecture Presentation - MBUG 2016
Dell Lustre Storage Architecture Presentation - MBUG 2016Dell Lustre Storage Architecture Presentation - MBUG 2016
Dell Lustre Storage Architecture Presentation - MBUG 2016Andrew Underwood
 
Dell EMC storage sc series
Dell EMC storage sc seriesDell EMC storage sc series
Dell EMC storage sc seriesHung Vu
 
VSP Mainframe Dynamic Tiering Performance Considerations
VSP Mainframe Dynamic Tiering Performance ConsiderationsVSP Mainframe Dynamic Tiering Performance Considerations
VSP Mainframe Dynamic Tiering Performance ConsiderationsHitachi Vantara
 
Offer faster access to critical data and achieve greater inline data reductio...
Offer faster access to critical data and achieve greater inline data reductio...Offer faster access to critical data and achieve greater inline data reductio...
Offer faster access to critical data and achieve greater inline data reductio...Principled Technologies
 
Learn the facts about replication in mainframe storage webinar
Learn the facts about replication in mainframe storage webinarLearn the facts about replication in mainframe storage webinar
Learn the facts about replication in mainframe storage webinarHitachi Vantara
 
Workstation heat and power usage: Lenovo ThinkStation P500 vs. HP Z440 Workst...
Workstation heat and power usage: Lenovo ThinkStation P500 vs. HP Z440 Workst...Workstation heat and power usage: Lenovo ThinkStation P500 vs. HP Z440 Workst...
Workstation heat and power usage: Lenovo ThinkStation P500 vs. HP Z440 Workst...Principled Technologies
 
VDI performance comparison: Dell PowerEdge FX2 and FC430 servers with VMware ...
VDI performance comparison: Dell PowerEdge FX2 and FC430 servers with VMware ...VDI performance comparison: Dell PowerEdge FX2 and FC430 servers with VMware ...
VDI performance comparison: Dell PowerEdge FX2 and FC430 servers with VMware ...Principled Technologies
 
Dell PowerEdge R820 and R910 servers: Performance and reliability
Dell PowerEdge R820 and R910 servers: Performance and reliabilityDell PowerEdge R820 and R910 servers: Performance and reliability
Dell PowerEdge R820 and R910 servers: Performance and reliabilityPrincipled Technologies
 
Back up deduplicated data in less time with the Dell DR6000 Disk Backup Appli...
Back up deduplicated data in less time with the Dell DR6000 Disk Backup Appli...Back up deduplicated data in less time with the Dell DR6000 Disk Backup Appli...
Back up deduplicated data in less time with the Dell DR6000 Disk Backup Appli...Principled Technologies
 
Maximizing Oracle Database performance with Intel SSD DC P3600 Series NVMe SS...
Maximizing Oracle Database performance with Intel SSD DC P3600 Series NVMe SS...Maximizing Oracle Database performance with Intel SSD DC P3600 Series NVMe SS...
Maximizing Oracle Database performance with Intel SSD DC P3600 Series NVMe SS...Principled Technologies
 
Elastic storage in the cloud session 5224 final v2
Elastic storage in the cloud session 5224 final v2Elastic storage in the cloud session 5224 final v2
Elastic storage in the cloud session 5224 final v2BradDesAulniers2
 
Increasing performance with the Dell PowerEdge FX2 and VMware Virtual SAN
Increasing performance with the Dell PowerEdge FX2 and VMware Virtual SANIncreasing performance with the Dell PowerEdge FX2 and VMware Virtual SAN
Increasing performance with the Dell PowerEdge FX2 and VMware Virtual SANPrincipled Technologies
 
Demartek Lenovo Storage S3200 i a mixed workload environment_2016-01
Demartek Lenovo Storage S3200  i a mixed workload environment_2016-01Demartek Lenovo Storage S3200  i a mixed workload environment_2016-01
Demartek Lenovo Storage S3200 i a mixed workload environment_2016-01Lenovo Data Center
 

La actualidad más candente (20)

Blazing Fast Lustre Storage
Blazing Fast Lustre StorageBlazing Fast Lustre Storage
Blazing Fast Lustre Storage
 
Extending the lifecycle of your storage area network
Extending the lifecycle of your storage area networkExtending the lifecycle of your storage area network
Extending the lifecycle of your storage area network
 
Hitachi Data Services. Business Continuity
Hitachi Data Services. Business ContinuityHitachi Data Services. Business Continuity
Hitachi Data Services. Business Continuity
 
Consolidating Oracle database servers onto Dell PowerEdge R920 running Oracle...
Consolidating Oracle database servers onto Dell PowerEdge R920 running Oracle...Consolidating Oracle database servers onto Dell PowerEdge R920 running Oracle...
Consolidating Oracle database servers onto Dell PowerEdge R920 running Oracle...
 
Performance Comparison of Intel Enterprise Edition Lustre and HDFS for MapRed...
Performance Comparison of Intel Enterprise Edition Lustre and HDFS for MapRed...Performance Comparison of Intel Enterprise Edition Lustre and HDFS for MapRed...
Performance Comparison of Intel Enterprise Edition Lustre and HDFS for MapRed...
 
Dell Lustre Storage Architecture Presentation - MBUG 2016
Dell Lustre Storage Architecture Presentation - MBUG 2016Dell Lustre Storage Architecture Presentation - MBUG 2016
Dell Lustre Storage Architecture Presentation - MBUG 2016
 
GuideIT Delivery Design - File Shares
GuideIT Delivery Design - File SharesGuideIT Delivery Design - File Shares
GuideIT Delivery Design - File Shares
 
Dell EMC storage sc series
Dell EMC storage sc seriesDell EMC storage sc series
Dell EMC storage sc series
 
VSP Mainframe Dynamic Tiering Performance Considerations
VSP Mainframe Dynamic Tiering Performance ConsiderationsVSP Mainframe Dynamic Tiering Performance Considerations
VSP Mainframe Dynamic Tiering Performance Considerations
 
Offer faster access to critical data and achieve greater inline data reductio...
Offer faster access to critical data and achieve greater inline data reductio...Offer faster access to critical data and achieve greater inline data reductio...
Offer faster access to critical data and achieve greater inline data reductio...
 
Learn the facts about replication in mainframe storage webinar
Learn the facts about replication in mainframe storage webinarLearn the facts about replication in mainframe storage webinar
Learn the facts about replication in mainframe storage webinar
 
Workstation heat and power usage: Lenovo ThinkStation P500 vs. HP Z440 Workst...
Workstation heat and power usage: Lenovo ThinkStation P500 vs. HP Z440 Workst...Workstation heat and power usage: Lenovo ThinkStation P500 vs. HP Z440 Workst...
Workstation heat and power usage: Lenovo ThinkStation P500 vs. HP Z440 Workst...
 
VDI performance comparison: Dell PowerEdge FX2 and FC430 servers with VMware ...
VDI performance comparison: Dell PowerEdge FX2 and FC430 servers with VMware ...VDI performance comparison: Dell PowerEdge FX2 and FC430 servers with VMware ...
VDI performance comparison: Dell PowerEdge FX2 and FC430 servers with VMware ...
 
Dell PowerEdge R820 and R910 servers: Performance and reliability
Dell PowerEdge R820 and R910 servers: Performance and reliabilityDell PowerEdge R820 and R910 servers: Performance and reliability
Dell PowerEdge R820 and R910 servers: Performance and reliability
 
Back up deduplicated data in less time with the Dell DR6000 Disk Backup Appli...
Back up deduplicated data in less time with the Dell DR6000 Disk Backup Appli...Back up deduplicated data in less time with the Dell DR6000 Disk Backup Appli...
Back up deduplicated data in less time with the Dell DR6000 Disk Backup Appli...
 
Managing Hyper-V on a Compellent SAN
Managing Hyper-V on a Compellent SANManaging Hyper-V on a Compellent SAN
Managing Hyper-V on a Compellent SAN
 
Maximizing Oracle Database performance with Intel SSD DC P3600 Series NVMe SS...
Maximizing Oracle Database performance with Intel SSD DC P3600 Series NVMe SS...Maximizing Oracle Database performance with Intel SSD DC P3600 Series NVMe SS...
Maximizing Oracle Database performance with Intel SSD DC P3600 Series NVMe SS...
 
Elastic storage in the cloud session 5224 final v2
Elastic storage in the cloud session 5224 final v2Elastic storage in the cloud session 5224 final v2
Elastic storage in the cloud session 5224 final v2
 
Increasing performance with the Dell PowerEdge FX2 and VMware Virtual SAN
Increasing performance with the Dell PowerEdge FX2 and VMware Virtual SANIncreasing performance with the Dell PowerEdge FX2 and VMware Virtual SAN
Increasing performance with the Dell PowerEdge FX2 and VMware Virtual SAN
 
Demartek Lenovo Storage S3200 i a mixed workload environment_2016-01
Demartek Lenovo Storage S3200  i a mixed workload environment_2016-01Demartek Lenovo Storage S3200  i a mixed workload environment_2016-01
Demartek Lenovo Storage S3200 i a mixed workload environment_2016-01
 

Destacado

Bod analysis schematic diagram
Bod analysis schematic diagramBod analysis schematic diagram
Bod analysis schematic diagramJanine Samelo
 
Easy guide and handbook of air conditioning and refrigeration repair
Easy guide and handbook of air conditioning and refrigeration repairEasy guide and handbook of air conditioning and refrigeration repair
Easy guide and handbook of air conditioning and refrigeration repairKhurram Qazi
 
Refrigeration and Air Conditioning
Refrigeration and Air ConditioningRefrigeration and Air Conditioning
Refrigeration and Air Conditioningfahrenheit
 
IGARSS2011_SWOT_mesoscale_morrow.ppt
IGARSS2011_SWOT_mesoscale_morrow.pptIGARSS2011_SWOT_mesoscale_morrow.ppt
IGARSS2011_SWOT_mesoscale_morrow.pptgrssieee
 
Autonomic Computing and Self Healing Systems
Autonomic Computing and Self Healing SystemsAutonomic Computing and Self Healing Systems
Autonomic Computing and Self Healing SystemsWilliam Chipman
 
Bar cohen-jpl-biomimetic-robots
Bar cohen-jpl-biomimetic-robotsBar cohen-jpl-biomimetic-robots
Bar cohen-jpl-biomimetic-robotsHau Nguyen
 
Self healing-systems
Self healing-systemsSelf healing-systems
Self healing-systemsSKORDEMIR
 
Study of the Antimatter at Large Hadron Collider
Study of the Antimatter at Large Hadron ColliderStudy of the Antimatter at Large Hadron Collider
Study of the Antimatter at Large Hadron ColliderSSA KPI
 
Sustainable Engineering - Practical Studies for Building a Sustainable Society
Sustainable Engineering - Practical Studies for Building a Sustainable Society Sustainable Engineering - Practical Studies for Building a Sustainable Society
Sustainable Engineering - Practical Studies for Building a Sustainable Society QuEST Forum
 
Presentation on solenoid valve
Presentation on solenoid valvePresentation on solenoid valve
Presentation on solenoid valveSiya Agarwal
 
An Overview of Microfluidics
An Overview of MicrofluidicsAn Overview of Microfluidics
An Overview of MicrofluidicsRajan Arora
 
Bladeless wind turbine
Bladeless wind turbineBladeless wind turbine
Bladeless wind turbineRevathi C
 
Blade-less Wind Turbine
Blade-less Wind TurbineBlade-less Wind Turbine
Blade-less Wind TurbineNeel Patel
 
Self-healing Materials
Self-healing MaterialsSelf-healing Materials
Self-healing MaterialsReset_co
 
Hovercraft presentation-The Future is Now!
Hovercraft presentation-The Future is Now!Hovercraft presentation-The Future is Now!
Hovercraft presentation-The Future is Now!Riaz Zalil
 
Design and Engineering Module-5: Product Centered Design and User Centered De...
Design and Engineering Module-5: Product Centered Design and User Centered De...Design and Engineering Module-5: Product Centered Design and User Centered De...
Design and Engineering Module-5: Product Centered Design and User Centered De...Naseel Ibnu Azeez
 
Tesla bladeless turbine
Tesla bladeless turbineTesla bladeless turbine
Tesla bladeless turbineRajeshwera
 

Destacado (20)

Bod analysis schematic diagram
Bod analysis schematic diagramBod analysis schematic diagram
Bod analysis schematic diagram
 
Easy guide and handbook of air conditioning and refrigeration repair
Easy guide and handbook of air conditioning and refrigeration repairEasy guide and handbook of air conditioning and refrigeration repair
Easy guide and handbook of air conditioning and refrigeration repair
 
Refrigeration and Air Conditioning
Refrigeration and Air ConditioningRefrigeration and Air Conditioning
Refrigeration and Air Conditioning
 
IGARSS2011_SWOT_mesoscale_morrow.ppt
IGARSS2011_SWOT_mesoscale_morrow.pptIGARSS2011_SWOT_mesoscale_morrow.ppt
IGARSS2011_SWOT_mesoscale_morrow.ppt
 
Autonomic Computing and Self Healing Systems
Autonomic Computing and Self Healing SystemsAutonomic Computing and Self Healing Systems
Autonomic Computing and Self Healing Systems
 
Bar cohen-jpl-biomimetic-robots
Bar cohen-jpl-biomimetic-robotsBar cohen-jpl-biomimetic-robots
Bar cohen-jpl-biomimetic-robots
 
Self healing-systems
Self healing-systemsSelf healing-systems
Self healing-systems
 
Study of the Antimatter at Large Hadron Collider
Study of the Antimatter at Large Hadron ColliderStudy of the Antimatter at Large Hadron Collider
Study of the Antimatter at Large Hadron Collider
 
Sustainable Engineering - Practical Studies for Building a Sustainable Society
Sustainable Engineering - Practical Studies for Building a Sustainable Society Sustainable Engineering - Practical Studies for Building a Sustainable Society
Sustainable Engineering - Practical Studies for Building a Sustainable Society
 
Presentation on solenoid valve
Presentation on solenoid valvePresentation on solenoid valve
Presentation on solenoid valve
 
An Overview of Microfluidics
An Overview of MicrofluidicsAn Overview of Microfluidics
An Overview of Microfluidics
 
Nano Fluids
Nano FluidsNano Fluids
Nano Fluids
 
Bladeless wind turbine
Bladeless wind turbineBladeless wind turbine
Bladeless wind turbine
 
Blade-less Wind Turbine
Blade-less Wind TurbineBlade-less Wind Turbine
Blade-less Wind Turbine
 
Self-healing Materials
Self-healing MaterialsSelf-healing Materials
Self-healing Materials
 
Hovercraft presentation-The Future is Now!
Hovercraft presentation-The Future is Now!Hovercraft presentation-The Future is Now!
Hovercraft presentation-The Future is Now!
 
Hovercraft
HovercraftHovercraft
Hovercraft
 
Design and Engineering Module-5: Product Centered Design and User Centered De...
Design and Engineering Module-5: Product Centered Design and User Centered De...Design and Engineering Module-5: Product Centered Design and User Centered De...
Design and Engineering Module-5: Product Centered Design and User Centered De...
 
Hovercraft seminar report
Hovercraft seminar report Hovercraft seminar report
Hovercraft seminar report
 
Tesla bladeless turbine
Tesla bladeless turbineTesla bladeless turbine
Tesla bladeless turbine
 

Similar a Deep Freeze - Design

Dcd 2012 liquid cooling presentation
Dcd 2012 liquid cooling presentationDcd 2012 liquid cooling presentation
Dcd 2012 liquid cooling presentationScicomm
 
white_paper_Liquid-Cooling-Solutions.pdf
white_paper_Liquid-Cooling-Solutions.pdfwhite_paper_Liquid-Cooling-Solutions.pdf
white_paper_Liquid-Cooling-Solutions.pdfwberring
 
Dell H2C™ Technology: Hybrid Cooling for Overclocked CPUs
	 Dell H2C™ Technology: Hybrid Cooling for Overclocked CPUs	 Dell H2C™ Technology: Hybrid Cooling for Overclocked CPUs
Dell H2C™ Technology: Hybrid Cooling for Overclocked CPUsWayne Caswell
 
Optimize power and cooling final
Optimize power and cooling finalOptimize power and cooling final
Optimize power and cooling finalViridity Software
 
Datacenter Efficiency: Building for High Density
Datacenter Efficiency: Building for High DensityDatacenter Efficiency: Building for High Density
Datacenter Efficiency: Building for High DensityChristopher Kelley
 
Optimize power and cooling final 1
Optimize power and cooling final 1Optimize power and cooling final 1
Optimize power and cooling final 1Viridity Software
 
cooling system in computer -air / water cooling
cooling system in computer -air / water coolingcooling system in computer -air / water cooling
cooling system in computer -air / water coolingIbrahem Batta
 
Blade Server Technology Daniel Nilles Herzing
Blade Server Technology  Daniel Nilles  HerzingBlade Server Technology  Daniel Nilles  Herzing
Blade Server Technology Daniel Nilles HerzingDaniel Nilles
 
Move your private cloud to Dell EMC PowerEdge C6420 server nodes and boost Ap...
Move your private cloud to Dell EMC PowerEdge C6420 server nodes and boost Ap...Move your private cloud to Dell EMC PowerEdge C6420 server nodes and boost Ap...
Move your private cloud to Dell EMC PowerEdge C6420 server nodes and boost Ap...Principled Technologies
 
Move your private cloud to Dell EMC PowerEdge C6420 server nodes and boost Ap...
Move your private cloud to Dell EMC PowerEdge C6420 server nodes and boost Ap...Move your private cloud to Dell EMC PowerEdge C6420 server nodes and boost Ap...
Move your private cloud to Dell EMC PowerEdge C6420 server nodes and boost Ap...Principled Technologies
 
MT25 Server technology trends, workload impacts, and the Dell Point of View
MT25 Server technology trends, workload impacts, and the Dell Point of ViewMT25 Server technology trends, workload impacts, and the Dell Point of View
MT25 Server technology trends, workload impacts, and the Dell Point of ViewDell EMC World
 
Apc cooling solutions
Apc cooling solutionsApc cooling solutions
Apc cooling solutionspeperoca
 
Blade server technology report
Blade server technology reportBlade server technology report
Blade server technology reportSarath Thalekkara
 
Exceeding the Limits of Air Cooling to Unlock Greater Potential in HPC
Exceeding the Limits of Air Cooling to Unlock Greater Potential in HPCExceeding the Limits of Air Cooling to Unlock Greater Potential in HPC
Exceeding the Limits of Air Cooling to Unlock Greater Potential in HPCinside-BigData.com
 

Similar a Deep Freeze - Design (20)

Dcd 2012 liquid cooling presentation
Dcd 2012 liquid cooling presentationDcd 2012 liquid cooling presentation
Dcd 2012 liquid cooling presentation
 
white_paper_Liquid-Cooling-Solutions.pdf
white_paper_Liquid-Cooling-Solutions.pdfwhite_paper_Liquid-Cooling-Solutions.pdf
white_paper_Liquid-Cooling-Solutions.pdf
 
Dell H2C™ Technology: Hybrid Cooling for Overclocked CPUs
	 Dell H2C™ Technology: Hybrid Cooling for Overclocked CPUs	 Dell H2C™ Technology: Hybrid Cooling for Overclocked CPUs
Dell H2C™ Technology: Hybrid Cooling for Overclocked CPUs
 
Optimize power and cooling final
Optimize power and cooling finalOptimize power and cooling final
Optimize power and cooling final
 
Datacenter Efficiency: Building for High Density
Datacenter Efficiency: Building for High DensityDatacenter Efficiency: Building for High Density
Datacenter Efficiency: Building for High Density
 
Lenovo HPC Strategy Update
Lenovo HPC Strategy UpdateLenovo HPC Strategy Update
Lenovo HPC Strategy Update
 
Optimize power and cooling final 1
Optimize power and cooling final 1Optimize power and cooling final 1
Optimize power and cooling final 1
 
cooling system in computer -air / water cooling
cooling system in computer -air / water coolingcooling system in computer -air / water cooling
cooling system in computer -air / water cooling
 
Data center cooling
Data center coolingData center cooling
Data center cooling
 
Blade Server Technology Daniel Nilles Herzing
Blade Server Technology  Daniel Nilles  HerzingBlade Server Technology  Daniel Nilles  Herzing
Blade Server Technology Daniel Nilles Herzing
 
Move your private cloud to Dell EMC PowerEdge C6420 server nodes and boost Ap...
Move your private cloud to Dell EMC PowerEdge C6420 server nodes and boost Ap...Move your private cloud to Dell EMC PowerEdge C6420 server nodes and boost Ap...
Move your private cloud to Dell EMC PowerEdge C6420 server nodes and boost Ap...
 
Move your private cloud to Dell EMC PowerEdge C6420 server nodes and boost Ap...
Deep Freeze - Design

  • 1. DEEP FREEZE™ and Nano-Cooling Technology: Next Generation Solution for Cooling Blade Servers. CASE STUDY & VALUE PROPOSITION. September 2011. Presented by: ©2011 Mobee Communications, LTD, Deep Freeze Technology Corporation, NGN Data Services Corporation & Global Access Advisors, LLC. All rights reserved. The information contained in this article is proprietary. As such, no part of this article may be copied or reproduced by any means, electronic or otherwise, without the express permission of Deep Freeze Technology Corporation, NGN Data Services Corporation and Global Access Advisors, LLC.
  • 2. DEEP FREEZE™
    The Deep Freeze™ blade server cooling concept is the chief component of an overall data center design strategy. It is a "cold-plate" technology evolution that is both "closed-loop" and "chassis-based", representing the most efficient cooling design in the market.
    It is an independent (after-market, retro-fit product), closed cooling system (100% self-contained), based on cold-plate technology (metal composites as the cooling structure), using ionized water (a non-damaging, electro-sensitive fluid) circulating through a chassis-based (an actual blade-server component, replacing the relatively inefficient fan) cooling design.
    Beneficial Highlights
    As a product, Deep Freeze™ is:
    • an independent unit
    • based upon a retro-fit design
    • designed as an after-market unit serving the $6B blade server industry
    The Deep Freeze™ product will:
    • drastically reduce the maintenance costs of blade server management
    • dramatically increase the efficiency of data center computing power
    • obviate the need for expensive CRAC units and other equipment
    • facilitate "green design" for data center construction and operation
    www.globalaccessadvisors.com
  • 3. Deep Freeze™ and Nano-Cooling Technology: Next Generation Solution for Cooling Blade Servers [1]
    Executive Summary
    It takes about 1,000 times more energy to move a data byte than it does to perform a computation with it once it arrives. Additionally, the time taken to complete a computation is currently limited by how long it takes to do the moving, all of which produces heat, which slows the processing even further.
    Air cooling can go some way to removing this heat, which is why computers have fans inside. Emerging technologies have begun to substitute liquid-cooling agents because a given volume of water can hold roughly 4,000 times more waste heat than air.
    Deep Freeze Technology Corp has developed a revolutionary liquid-cooling approach that does not involve connectivity to external CRACs (computer room air conditioners), chillers, etc.: their Direct Liquid Cooling Platform for the HP Matrix Blade Technology (patent pending): Deep Freeze™.
    The Deep Freeze™ design obviates the need for CFD analysis and maximizes the power-to-cooling ratio, while saving real estate. They have prototyped a plug-in, after-market replacement for the CPU fans that incorporates the liquid-cooling strategy. The beta model works for the HP ProLiant [2] series, but they have designs for horizontal (across corporate offerings) and vertical (different alloys, densities, etc.) applications.
    [1] A blade server is a server chassis housing multiple thin, modular electronic circuit boards, known as server blades. Each blade is an individual server, often dedicated to a single application. The blades are literally servers on a card, containing processors, memory, integrated network controllers, an optional Fibre Channel host bus adapter (HBA) and other input/output (I/O) ports. Blades typically come with two Advanced Technology Attachment (ATA) or SCSI drives.
    [2] HP holds the number 1 position in the worldwide server market, with a 31.5% factory revenue share for 1Q11. HP's 10.8% revenue growth was led by increased demand for both its x86-based ProLiant servers and Itanium-based Integrity servers.
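The claim that a given volume of water holds far more waste heat than air can be sanity-checked from volumetric heat capacities (density × specific heat). A minimal sketch, using textbook room-temperature values (the exact ratio depends on conditions, so it lands near, not exactly at, the ~4,000× figure cited above):

```python
# Volumetric heat capacity in J/(m^3*K) = density * specific heat.
# Textbook values at roughly room temperature and sea-level pressure.
water_vol_heat = 997 * 4186   # kg/m^3 * J/(kg*K) for liquid water
air_vol_heat = 1.2 * 1005     # kg/m^3 * J/(kg*K) for air

ratio = water_vol_heat / air_vol_heat
print(f"water stores ~{ratio:,.0f}x more heat per unit volume than air")
```

The computed ratio is on the order of 3,500, the same order of magnitude as the figure quoted in the slide.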
  • 4. Background
    Increasing operational expenses (energy costs [3], space provisioning [4], etc.) are forcing companies to cool their data centers more efficiently. The ubiquitous blade server [5] exacerbates the problem. A single blade rack consumes more than 25 kW, four times the power required for a standard server rack. Much of that energy is converted to heat, so cooling blade servers presents its own unique set of challenges for the temperature-maintenance strategies of a server room or data center.
    With traditional standard rack servers, cooling was often a function of offsetting temperature variations: by assessing hardware deployment, the simple calculation of heat produced would yield the resulting "cool air" required to be pumped into the environment to maintain temperatures within the hardware's operating limits.
    But mixing hot and cold air is exactly the wrong approach to cooling blade servers. Specific amounts of cold air need to be delivered to the blade rack directly and quickly, while the heated air produced by energy consumption must be ventilated quickly away from the rack.
    Because blade racks require more precise ventilation, computational fluid dynamics (CFD) is often used to model airflow movements through a data center. By assessing the variables of the server area's physical properties and cooling capabilities, CFD can predict the appropriate airflow mixture between hot and cold air, and thus accurately predict the amount of cold air necessary to cool the data center and the most efficient pathways of cold-air circulation directly to the servers.
    [3] Blade servers allow more processing power in less rack space, simplifying cabling (up to an 85% reduction) and reducing power consumption. The advantage of blade servers comes not only from the consolidation benefits of housing several servers in a single chassis, but also from the consolidation of associated resources (like storage and networking equipment) into a smaller architecture that can be managed through a single interface.
    [4] U is the standard unit of measure for designating the vertical usable space, or height, of racks (metal frames designed to hold hardware devices) and cabinets (enclosures with one or more doors). This unit refers to the space between shelves on a rack; 1U is equal to 1.75 inches. For example, a rack designated as 20U has 20 rack spaces for equipment and 35 (20 × 1.75) inches of vertical usable space. Rack and cabinet spaces, and the equipment that fits into them, are measured in U.
    [5] The leading manufacturers in the $5.6B blade server technology market (in order of market share): HP, IBM, Dell, Cisco, Siemens, Fujitsu, Oracle, Sun, and NEC.
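The rack-unit arithmetic in the footnote above can be sketched as a one-line conversion (the 20U and 42U examples are illustrative; 42U is simply a common full-height rack size):

```python
# 1U = 1.75 inches of vertical rack space (see footnote on U above).
U_INCHES = 1.75

def usable_height_inches(rack_units: int) -> float:
    """Vertical usable space, in inches, of a rack designated `rack_units`U."""
    return rack_units * U_INCHES

print(usable_height_inches(20))  # the footnote's 20U example: 35.0 inches
print(usable_height_inches(42))  # a common full-height 42U rack: 73.5 inches
```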
  • 5. Performing CFD calculations can be quite challenging for server arrays deploying both blade and traditional servers. Because blade servers require more directed cooling, mechanical engineers had to dispense with traditional data center airflow cooling approaches. Reliance on "raised floor" concepts (cold air pumped through perforated floors) gave way to "in-row cooling" (alternating columns of cold and hot air, with the cold air forced horizontally, from the back of the rack to the front). Currently, two emerging trends seem to be gaining favor: indirect liquid cooling and direct immersion techniques.
    Indirect Liquid Cooling
    With liquid cooling, cold fluid, usually water, is piped to a special water-cooling heat sink, called a water block, on the processor. While a standard heat sink has metal fins to increase its surface area with the air around the server, a water block consists of a metal pipe that runs through a conductive metal block. The processor heats the block; cold water travels into the block, cooling it back down and warming the water, which is then piped out to a radiator, which cools it again. As a much better conductor of heat than air, water is a more efficient cooling agent. And because the water goes straight to the server, there is no need to factor in hot-cold mixing or CFD.
    Liquid cooling is common in supercomputing and high-performance computing (HPC), where facility operators manage computing clusters producing high heat loads. Rising heat densities have spurred predictions that liquid cooling would be more widely adopted, but some data center managers remain wary of having water near their equipment.
    While liquid cooling is a proven technology, it does require a fair degree of capital overhead. In addition to server-room air ducts and electric sockets for cooling units, liquid cooling requires the installation of piping and failsafe systems (in case of a leak or other malfunction). The real estate benefits sought through blade server technology are often offset by the additional space required by the extra cooling units needed for liquid cooling.
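The water-block loop described above obeys a simple steady-state energy balance: the heat removed equals the mass flow of coolant times its specific heat times its temperature rise (Q = ṁ·c·ΔT). A minimal sketch, where the 130 W processor load and 10 K coolant rise are illustrative assumptions, not Deep Freeze™ specifications:

```python
# Steady-state energy balance for a water-block loop: Q = m_dot * c_p * dT.
C_P_WATER = 4186        # J/(kg*K), specific heat of liquid water
cpu_heat_w = 130        # hypothetical processor heat load, in watts
delta_t_k = 10          # assumed coolant temperature rise across the block, K

m_dot = cpu_heat_w / (C_P_WATER * delta_t_k)   # kg/s of water required
litres_per_min = m_dot * 60                    # 1 kg of water ~ 1 litre
print(f"required flow: {litres_per_min:.3f} L/min")
```

Even a trickle of water (well under a quarter of a litre per minute here) carries away a full CPU's heat, which is why the slide calls water the more efficient cooling agent.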
  • 6. Direct Immersion Techniques
    Direct immersion (submersion) is another innovation in blade server cooling. Blade server racks are entirely submersed in tubs of cooled, non-static mineral oils. Though not entirely "revolutionary", proponents suggest that mineral-oil coolants have been "optimized for data centers" and can support heat loads of up to 100 kilowatts per 42U rack, far beyond current average heat loads of 4 to 8 kilowatts per rack and high-density loads of 12 to 30 kilowatts per rack. These systems are designed to comply with fire codes and the Clean Water Act, and integrate with standard power distribution units (PDUs) and network switches.
    Some mineral-oil-style coolants can be messy to maintain. Proponents say the coolant can be drained for enclosure-level maintenance, and individual servers can be removed for work. Detractors suggest that the real estate utilization of horizontal bathing tubs substitutes one issue for another.
    Rather than approaching the challenge of exploding heat-removal requirements with limited, traditional measures, what's needed is a shift in approach. Because the constant in data center heat loads has been rapid, unpredictable change, new approaches to cooling are replacing traditional measures as best practice. Cooler parts last longer. When parts stay below the specified maximum thermal limit, they operate more consistently, and voltage fluctuations that can lead to data errors and crashes are minimized.
    A 2010 survey of nearly 100 members of the Data Center Users' Group revealed that data center managers' top three concerns were density of heat and power (83 percent), availability (52 percent), and space constraints/growth (45 percent). Answering these concerns requires an approach that delivers the required reliability and the flexibility to grow, while providing the lowest cost of ownership possible.
    The industry seeks a solution that:
    • can effectively and efficiently address high-density zones
    • supports flexible options that are easily scalable
    • incorporates technologies that improve energy efficiency, and
    • becomes an element of a system that is easy to maintain and support.
    Deep Freeze™ is one such viable solution.
  • 7. Deep Freeze™ Technical Approach
    Deep Freeze™ is predicated upon cold-plate technology (a liquid-cooled heat dissipater). The technology uses an aluminum or other alloy "plate" containing internal tubing through which a liquid coolant is forced, to absorb heat transferred to the plate by transistors and other components mounted on it (Fig. 1).
    Cold plates are among the most viable contact cooling mechanisms. When air-cooled heat sinks are inadequate, liquid-cooled cold plates are the ideal high-performance heat-transfer solution. Cold-plate technologies utilize varying geometries and coolants to provide a range of thermal performances; the lower the thermal resistance, the better the performance of the cold plate.
    For blade servers, with their compact design and increasing power densities, cold plates represent a superior technology. As a chassis- or component-level approach, Deep Freeze™ represents just such a solution.
    Fig. 1: Design
  • 8. The Deep Freeze™ design contemplates using a copper fluid path and ionized water as the cooling fluid. Like a car radiator, the liquid CPU design circulates a cooled liquid through a heat sink attached to the blade processor. Deep Freeze™ technology passes ionized water, which acts as a heat sink, through its module (Fig. 2). Heat is transferred from the hot processor to the heat-sink module. The hot liquid then moves through the Deep Freeze™ heat-sink module and into its unit, transferring the heat into the ambient air outside the blade.
    The heat exchange takes place inside the cooled interior of the Deep Freeze™ unit, and the cooled liquid travels back into the blade through the heat-sink module to continue the process. An essential aspect of the Deep Freeze™ technology is that cooling occurs in a close-coupled environment. This allows the heat exchange between the nano-chiller and the Deep Freeze™ unit without heating the room's exterior.
    Fig. 2: Heat Dissipation Principle
  • 9. Benefits of Closed-Loop Cooling
    There are basic fundamentals to contemporary data center management: (1) the higher power consumption of modern blade servers produces more heat [6]; (2) almost all power consumed by rack-mounted equipment is converted to sensible heat; (3) which increases the temperature in the environment.
    A 2010 HP technical study [7] surveyed the various cooling strategies and their effects upon a representative example of power consumption in a 42U IT equipment rack: ProLiant DL160 G6 1U servers (42 servers @ 383 W per server). The cooling requirement was computed: 54,901 BTU/hr ÷ 12,000 BTU/hr per ton = 4.58 tons.
    HP determined that the increasing heat loads created by the latest server systems require more aggressive cooling strategies than the traditional open-area approach (Fig. 3).
    Figure 3: Cooling strategies based on server density/power per rack (HP 2010). The chart plots density (nodes per rack) against power (8 to 40 kW per rack), progressing from traditional open-area cooling, through cold/hot aisle containment, supplemented data center cooling and closed-loop cooling, to chassis/component-level cooling and future technologies.
    [6] The sensible heat load is typically expressed in British Thermal Units per hour (BTU/hr) or watts, where 1 W equals 3.413 BTU/hr. The rack's heat load in BTU/hr can be calculated as follows: Heat Load = Power [W] × 3.413 BTU/hr per watt. In the United States, cooling capacity is often expressed in "tons" of refrigeration, derived by dividing the sensible heat load by 12,000 BTU/hr per ton.
    [7] "Cooling Strategies for IT Equipment" (September 2010). Hewlett-Packard Development Company.
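The HP rack example above can be reproduced directly from the conversion factors in the footnote (1 W = 3.413 BTU/hr; 12,000 BTU/hr per ton of refrigeration):

```python
# Heat load and cooling "tons" for the HP 42U rack example:
# 42 ProLiant DL160 G6 servers at 383 W each.
W_TO_BTU_HR = 3.413
BTU_HR_PER_TON = 12_000

servers, watts_each = 42, 383
rack_watts = servers * watts_each       # 16,086 W of sensible heat
btu_hr = rack_watts * W_TO_BTU_HR       # ~54,901 BTU/hr
tons = btu_hr / BTU_HR_PER_TON          # ~4.58 tons of refrigeration

print(f"{rack_watts:,} W -> {btu_hr:,.0f} BTU/hr -> {tons:.2f} tons")
```

This matches the 4.58-ton figure quoted in the study.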
  • 10. Of the cooling strategies commercially available, HP concluded that closed-loop cooling is "the best solution for high-density systems consuming many kilowatts of power. These systems have separate cool air distribution and warm air return paths that are isolated from the open room air. Closed-loop systems typically use heat exchangers that use chilled water for removing heat created by IT equipment. Since they are self-contained, closed-loop cooling systems offer flexibility and are adaptable to a wide range of locations and environments. Closed-loop cooling systems can also accommodate a wide range of server and power densities."
    Deep Freeze™ is a closed-loop cooling design that performs at the chassis or component level. It is the "future cooling design" predicted in the HP study.
    Competitive Landscape
    Currently, there are three entrants in the blade server cooling technology space suggesting variations on the "future cooling design" theme.
    1. IBM: In July 2010, IBM announced the successful pilot launch of its newly developed zero-emissions liquid-cooled blade server: Aquasar. IBM worked closely with Wolverine's MicroCool Division to develop innovative liquid-cooling components within this new high-performance computer. It consumes 40 percent less energy compared to a similar system using air-cooling technology. The IBM blade server relies upon a proprietary MicroCool "cold plate" and integrated Wolverine copper liquid-cooling loops. The design claims to maintain an entire electronic footprint below 80 degrees C, with a 60 degrees C inlet fluid made up of water. There have been several challenges to IBM's "green assertions" (www.flickr.com/photos/ibm_research_zurich/4537326383/). The pilot operation also utilizes the waste heat from the computer to warm the external structures. IBM collaborated for over three years, at a cost in excess of $22M.
    2. Google: In 2009, Google patented a "server sandwich" design in which two motherboards are attached to either side of a liquid-cooled heat sink. Drawings submitted with the patent illustrate Google's design and how it might be implemented in a data center (http://www.datacenterknowledge.com/archives/2010/07/06/google-patents-liquid-cooled-server-sandwich/).
  • 11. The diagram depicts the "server sandwich" assemblies deployed in a row of racks, with each assembly connected to supply and return pipes for liquid cooling, which are housed in the hot aisle. The illustration of the heat sink provides a view of the grooves where processors for the motherboards would fit onto either side, allowing the heat sink to cool two motherboards at once.
    The liquid cooling design patented by Google features custom motherboards with components attached to both sides. Heat-generating processors are placed on the side of the motherboard that comes in contact with the heat sink, which is an aluminum block containing tubes that carry cooling fluid. Components that produce less heat, like memory chips, are placed on the opposite side of the motherboard, adjacent to fans that provide air cooling for these components.
    Motherboards are attached to either side of the heat sink, creating a "server sandwich" assembly that can be housed in a rack. The diagrams submitted with the patent depict cabinets filled with 10 of these liquid-cooled assemblies, suggesting each takes up 4U in a rack. Similar to cold-plate technology, the heat sink can cool heat loads of up to 80 kilowatts per rack in some implementations. Google's patent says the heat sink could be configured to use either chilled water or a liquid coolant.
    3. Hardcore Computers: In April 2010, Hardcore Computer, Inc. announced the launch of Liquid Blade™, the first total liquid submersion blade server. The initial Liquid Blade™ server platform, powered by two Intel® 5500 or 5600 series Xeon® processors running on an Intel® S5500HV reference motherboard, addresses several major data center challenges: power, cooling and space. Hardcore Computer's patented technology submerges all of the heat-producing components of the Liquid Blade.
    Hardcore's Liquid Blade technology contends that it is 1,350 times more efficient than air at heat removal and increases compute density because far less space is required between components. With little heat escaping into the data center, the need for air conditioning and air-moving equipment is minimized. The net result is a much smaller physical and carbon footprint for the data center. As an added benefit, there is no need for special fire-protection systems to cover the servers.
  • 12. This is because all of the blade components are submerged, so there is no oxygen exposure. Without oxygen there is no potential for sustainable fire.
    The major criticism of the Hardcore Computer product is that it relies extensively on proprietary parts. In order to upgrade, most parts will need to be purchased through Hardcore Computer, thus limiting consumer options. Other complaints range from "sizeable footprint" to "messy operations".
    Deep Freeze™: A Comparative Study
    In October 2010, Hardcore Computer engaged a third-party vendor to develop construction budgets for two 3.2-megawatt data centers: one using air-cooling architecture and the other equipped with Liquid Blade™ servers. In that study of equivalent-compute-power facilities, each data center was designed to house 6,397 servers utilizing the same 2-CPU-per-server technology. Not surprisingly, the Liquid Blade™ servers significantly outperformed the air-cooled design in three key areas: physical space needs, power density and cooling load.
    In June 2011, Deep Freeze™ was comparatively tested, using the same methodology and criteria. The results are as follows:
    Comparative Capacity
    Deep Freeze™ requires far fewer physical servers due to its virtualization methodology.
    Table 1. Comparative Capacity Analysis
  • 13. Auxiliary Equipment
    Deep Freeze's™ closed-loop, chassis/component design obviates the need for substantial investments in traditional CRAC architectures.
    Table 2. Auxiliary Equipment Comparison
  • 14. Cooling Load
    As each source of heat generation was examined and compared, the cooling loads of the exterior walls, hosts, lighting, servers and the UPS system were accounted for. As demonstrated in Table 4, the chiller capacity required for both the Liquid Blade™ and air-cooled suites is substantially greater than for the chiller-less solution, the primary reason being that Deep Freeze™ facilitates a smaller footprint to cool, while still maintaining the data center's computing capacity.
    Table 4. Cooling Load Comparison
    Power Consumption
    Comparing the Deep Freeze™ design with the air-cooled and Liquid Blade™ suites illustrates that the cost of their auxiliary equipment is significantly higher.
    Table 5. Power Consumption
  • 15. Construction Costs
    Table 6 compares construction costs. Though all suites have identical computing capacity, the capital costs to construct both the air-cooled and the Liquid Blade™ architectures are, on average, 175% higher than for the Deep Freeze™ suite.
    Table 6. Construction Costs Comparison
    Total Cost of Ownership (TCO)
    The Deep Freeze™ approach to data center design, architecture and cooling methodologies (an after-market, retro-fit design) results in significant overall savings in estimated TCO. As a retro-fitted, after-market product, Deep Freeze™ units (replacing the fans installed during blade-server manufacture) will substantially decrease TCO through the power efficiencies realized and cooling expenditures reduced.
    Table 7. TCO
  • 16. Deep Freeze™: The Value Proposition Realized
    Green Design and the "Whole System Approach"
    Power and cooling issues can be articulated separately for the purpose of explanation and analysis, but effective deployment of a total virtualization solution requires a system-level view. The shift toward virtualization, with its new challenges for physical infrastructure, re-emphasizes the need for integrated solutions using a holistic approach, that is, considering everything together and making it work as a system [8].
    All system components should communicate and interoperate. Demands and capacities must be managed in real time, preferably at the rack level, to ensure efficiency.
    A recent and significant data center science study concluded that "Virtualization is an undisputed leap forward in data center evolution- it saves energy, it increases computing throughput, it frees up floor space, it facilitates load migration and disaster recovery. Less well known is the extent to which virtualization's entitlement can be multiplied if power and cooling infrastructure is optimized to align with the new, leaner IT profile. In addition to the financial savings obtainable, these same power and cooling solutions answer a number of functionality and availability challenges presented by virtualization." [9]
    Two major challenges that virtualization poses to physical infrastructure are the need for dynamic power and cooling systems, and the rack-level, real-time management of capacities. These challenges have been met by Deep Freeze's™ closed-loop, chassis/component cooling architecture and its real-time capacity management module. These solutions are based on design principles that resolve functional challenges, reduce power consumption, and increase efficiency.
    [8] Niles, Suzanne. "Virtualization: Optimizing Power and Cooling to Maximize Benefits", 2011. APC Data Center Science Center.
    [9] Ibid., at page 19.
  • 17. The comprehensive Deep Freeze™ solution is a self-sufficient, green-energy data center that uses ultra-efficient cooling methods for both blade and structure design. This challenge is met by taming the cooling plant's energy consumption and by designing a self-sufficient green building that uses alternative energy solutions, including solar energy, to offset auxiliary energy requirements such as lighting.
    The second aspect involves cooling the blades at the CPU level, this being the most efficient method to extract heat from the blade and, more importantly, from the rack. As part of the cooling solution, the UPS and storage components were physically placed in a separate area in order to control cooling with variable airflow and to maintain a constant temperature in the surrounding space.
    Deep Freeze™ data centers account for the integration of solar and natural gas solutions as an integral part of the data center's self-sustaining ability. By utilizing grid-tie solar systems and natural gas generators, the load is reduced both on and off the grid. By diverting the solar production to the UPS network and by simulating the generators on a grid-like infrastructure, the effective redirection of solar-produced electricity through the generator into the UPS packs reduces the load on the generator system.
    This holistic technology uses no moving components and requires minimal energy resources. Since the Deep Freeze™ modules do not participate in the consumption of energy in the data center itself, the sole energy usage comes from computing power.
    In addition to offering a comprehensive model for new green-energy data centers, Deep Freeze™ is also capable of reducing upgrade expenditures on existing equipment, making it the ideal solution for existing data centers looking to drastically reduce maintenance costs by optimizing cooling without replacing costly equipment. Deep Freeze™ close-coupled liquid CPU cooling allows for optimization of space in existing data centers, resulting in increased energy savings and the elimination of additional building-space costs.
  • 18. SIPNOC CASE STUDY
    Antonis Valamontes, President, Mobee Communications, LTD
    Mobee Communications, LTD contracted NGN Data Services in 2010 to deploy the Deep Freeze™ product in its SIPNOC and to design a Tier-3 class data center in the United States. The data center had to reliably support a 1,500-server computing capacity, while integrating solar and natural gas options as part of a self-sustaining micro-grid design within a 1,500 sq. ft. environmental footprint.
    Mobee Communications, LTD is a venture-backed start-up that offers mobile IP telephony through its virtual SIPNOC design. By designing the SIPNOC site with Deep Freeze™ technology, NGN Data Services provided Mobee with the confidence that Mobee's environmental needs regarding power, cooling, humidity and micro-grid capability would be met.
    The primary goal was to build a self-sustainable facility that could operate efficiently on and off the grid. Building the facility in Florida presented its own set of unique challenges due to the intense heat and humidity. NGN's solution was to integrate solar energy and natural gas power generation as the primary sources of energy, while designing the grid as a backup system; i.e., what could be considered an "on-grid UPS", making the grid available should we choose to use it, but not mandatory. In effect, NGN created the "micro grid". The micro-grid approach is extremely cost-efficient due to its ability to build excess energy and then push it to the grid. Everything produced in the system is made for consumption and not for return.
    The design incorporates a two-shell building, a building within a building, creating a 6" air pocket between the outer and inner walls. The purpose of this approach was to create a natural insulator, the same way feathers create tiny air pockets in sleeping bags and comforters to insulate and reduce the escape of heat. For the underlining of the roof space, NGN used an "icing" approach to create an R-factor and further insulate the building, preventing any outside air from entering.
    The need to retain the computing capacity in a confined space, as well as Mobee's specific requirements for virtualization, made the HP Matrix blade system with the C7000 enclosures our top choice. The HP Matrix's superior
  • 19. energy management solution aligned perfectly with the NGN Data Services green-energy data suite model, and by combining it with the Deep Freeze™ solution, cooling optimization with no additional energy-consumption cost was achieved.
    While designing the server room, NGN isolated the servers in their own space. Every other accessory, hardware and storage device was assigned to a smaller space with a controlled air environment. When completed, the network had an estimated storage capacity of 850 terabytes and a computing capacity of over 900 virtual servers, all with dedicated NIC interfaces.
    The data center server room where heat is generated by the blades is referred to as the "hot room"; the adjacent room where the storage and UPS are located is referred to as the "cold room." The temperatures in both the hot and cold rooms are maintained at a constant 70°F, controlled electronically by variable airflow vents located throughout the building.
  • 20. Since the Deep Freeze™ modules were deployed to extract heat at the CPU level, the need for large, heavy chillers to cool down the server space was eliminated. NGN, anticipating the higher heat of the server room, designed the adjacent room to act as a natural heat exchanger and divided the two rooms with a glass wall, resulting in an efficient heat "exchanger" incorporating a holistic method of temperature control that required no additional energy consumption. As a result, the glass became a natural heat exchanger transferring 14,000 BTU/hr of heat, the equivalent of cooling 1,900 virtual servers or two full racks of eight C7000 enclosures.
    NGN also selected a green, carbon-neutral fire suppression system called Aero-K that creates zero ozone depletion, zero ecological hazards and zero contribution to global warming.
    A main objective of Mobee Communications, LTD was to become a premier mobile IP carrier and to operate a globally distributed system. Our need for extensible grid computing in a totally virtualized environment that could be rapidly deployed anywhere in the world, with no loss in reliability or performance, was actualized by Deep Freeze Technology Corp's green-energy micro-grid model.
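The glass-wall figure in the case study can be put in more familiar units with the same conversion factor used earlier in the document (1 W = 3.413 BTU/hr):

```python
# Converts the case study's quoted glass-wall heat transfer (14,000 BTU/hr)
# to watts, using 1 W = 3.413 BTU/hr.
BTU_HR_PER_W = 3.413

glass_btu_hr = 14_000
glass_watts = glass_btu_hr / BTU_HR_PER_W
print(f"{glass_watts:,.0f} W of heat moved passively through the glass wall")
```

That is roughly 4.1 kW of heat rejected with no additional energy consumption.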
  • 20. DEEP FREEZE™ Since the Deep Freeze™ modules were deployed to extract heat at the CPU level, the need for large, heavy chillers to cool the server space was eliminated. Anticipating the higher temperature of the server room, NGN designed the adjacent room to act as a natural heat exchanger and divided the two rooms with a glass wall, resulting in an efficient heat exchanger and a holistic method of temperature control that required no additional energy consumption. As a result, the glass became a natural heat exchanger transferring 14,000 BTU/hr of heat, the equivalent of cooling 1,900 virtual servers or two full racks of eight C7000 enclosures.

NGN also selected a green, carbon-neutral fire suppression system called Aero-K that causes zero ozone depletion, zero ecological hazard and zero contribution to global warming.

A main objective of Mobee Communications, LTD was to become a premier mobile IP carrier operating a globally distributed system. Our need for extensible grid computing in a totally virtualized environment that could be rapidly deployed anywhere in the world, with no loss in reliability or performance, was actualized by Deep Freeze Technology Corp's green energy micro grid model.
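As a rough sanity check on the heat-transfer figure above, the quoted 14,000 BTU/hr can be restated in SI units with the standard conversion 1 BTU/hr ≈ 0.2931 W. The sketch below performs that conversion and derives an implied per-virtual-server cooling figure; the per-server number is purely illustrative arithmetic, not a measurement from the deployment.

```python
# Restate the quoted glass-wall heat-transfer rate in SI units.
# Standard conversion factor: 1 BTU/hr = 0.29307107 W.
BTU_PER_HR_TO_W = 0.29307107

heat_btu_hr = 14_000      # transfer rate quoted for the glass wall
virtual_servers = 1_900   # quoted equivalent cooling capacity

heat_w = heat_btu_hr * BTU_PER_HR_TO_W
per_server_w = heat_w / virtual_servers

print(f"{heat_w / 1000:.1f} kW total")            # ~4.1 kW
print(f"{per_server_w:.2f} W per virtual server") # ~2.16 W
```

At roughly 4.1 kW, the implied load is about 2 W per virtual server, which only makes sense because many virtual servers share each physical blade.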
  • 21. DEEP FREEZE™ Conclusion

Deep Freeze™ & "Green Data Center Architecture": The Value Proposition Defined

Temperature management is, in itself, a comprehensive solution for self-sustaining green energy data centers. Deep Freeze's™ "plug & play", retrofitted liquid cooling technology provides an after-market, closed-loop liquid cooling solution at the CPU level. Deep Freeze™ obviates the need to replace existing blade servers and reduces dependence on external CRAC architectures.

Deep Freeze™ technology is the most cost-effective cooling technology in the industry today, representing a paradigm shift in deploying and cooling high-performance computing environments. No other cooling method delivers such a marked reduction in cost, energy consumption and space while simultaneously providing the ultimate green energy, eco-friendly data suite solution.

Beyond the benefits of Deep Freeze™ as a unified solution for cooling optimization and overhead cost reduction, the virtualization methodology and "green data center architecture" save money, increase computing power and conserve energy. By offering environmentally and spatially conscious solutions, Deep Freeze™ nano-chiller technology has become the next evolution in green energy data centers.

The Deep Freeze™ CPU chilling technology + the NGN virtualization methodology + the NGN "green" data center architecture model = an across-the-board solution for reducing costs and operating high-performance computing with a minimal environmental footprint.

Benefits include:
  • Environmentally friendly approach and design
  • Enhanced space, performance, efficiency and liquid cooling
  • Energy selective: deployable in areas where energy is limited or expensive
  • Increases the capacity of existing stand-alone data centers
  • Designs can be rapidly deployed as "one-offs" or in pod-like units
  • Ideal for advanced military applications or natural disaster recovery efforts
  • 22. Contact Information Deep Freeze™ c/o Global Access Advisors info@globalaccessadvisors.com