This document summarizes the design and operations of a large data center facility. Key aspects include ambient air cooling that saves over 11 million gallons of water per year, water-side economizers that save an additional 2 million gallons per year, and generators that provide backup power within 6 seconds of an outage. The facility has a highly resilient fiber network with connectivity to other major hubs. The pod configuration and cabling pathways were designed for flexibility, manageability, and to accommodate growth over time. Extensive policies and training modules help achieve consistent operations.
8. Ambient air cooling saves 11.8 million gallons of water per year, with an associated water bill
savings of $50,000/yr. Brent's white paper on data center ambient air cooling was
published in the Data Center Journal and 7 x 24 Exchange Magazine.
The project also uses water-side economizers, saving an additional 2 million gallons of water
per year with an associated $50,000 annual water bill savings. These design aspects
earned a LEED Innovation in Design credit.
The facility has no raised floor, saving hundreds of thousands of dollars in CAPEX. The nearly
14-foot-tall fan wall units, and every motor in the facility, use not just variable frequency
drives but ultra-high-efficiency VFDs. The motors speed up and slow down
based on the pressure differential between the computer hot and cold
aisles, so the computers are never starved for cooling air and the fans never work any
harder than they need to, saving dramatically on cooling energy. For 85% of the year, the facility
uses only 36 kW to cool the entire IT load. As with any fluid dynamics problem, reducing plenum
friction reduces the amount of fan energy required. For this reason, the building itself
acts as the plenum, reducing friction to almost nothing.
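To make the fan behavior concrete, here is a minimal sketch of the kind of differential-pressure fan control described above. The set point, gain, speed limits, and function names are illustrative assumptions, not the facility's actual BMS program.

```python
# Illustrative sketch of differential-pressure fan wall control (not the actual BMS logic).
# Assumption: the BMS exposes a cold-aisle minus hot-aisle pressure reading in Pa and
# accepts a fan speed command as a percentage; set point and gain values are made up.

DP_SETPOINT_PA = 5.0                 # hypothetical target cold-to-hot aisle differential, Pa
KP = 4.0                             # hypothetical proportional gain, % speed per Pa of error
MIN_SPEED, MAX_SPEED = 20.0, 100.0   # keep the fans turning, never exceed 100%

def next_fan_speed(current_speed_pct: float, measured_dp_pa: float) -> float:
    """Nudge the fan wall VFD speed so the aisle differential holds at set point.

    If cold-aisle pressure sags (servers pulling more air than the fans supply),
    the error is positive and the fans speed up; if pressure builds, they slow down,
    so the fans never work harder than the IT load demands.
    """
    error = DP_SETPOINT_PA - measured_dp_pa
    new_speed = current_speed_pct + KP * error
    return max(MIN_SPEED, min(MAX_SPEED, new_speed))

# Example: the differential has sagged to 3.5 Pa, so fans at 40% ramp up to 46%.
print(next_fan_speed(40.0, 3.5))
```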
The facility also uses atomizers for latent cooling, extending the ambient air cooling
hours beyond 85% of the year. To achieve water independence, the facility has N+1
chillers as a backup to the fluid coolers. The facility uses set points of 75°F and 20 to 50% RH.
The mechanical BMS orchestrates the sequence of operations with access to
5,000 data points.
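The sketch below shows one plausible way the economizer, atomizer, and chiller stages described above could be sequenced against the stated 75°F and 20-50% RH set points. The mode names and the assumed evaporative approach temperature are illustrative only, not the facility's actual sequence of operations.

```python
# Illustrative cooling-mode sequencing based on the set points stated above.
# The 10°F evaporative approach achievable with the atomizers is an assumption.

def cooling_mode(outdoor_temp_f: float, outdoor_rh_pct: float) -> str:
    SUPPLY_SETPOINT_F = 75.0
    RH_LOW, RH_HIGH = 20.0, 50.0
    EVAP_APPROACH_F = 10.0   # assumed temperature drop from latent (atomizer) cooling

    if outdoor_temp_f <= SUPPLY_SETPOINT_F and RH_LOW <= outdoor_rh_pct <= RH_HIGH:
        return "ambient air (economizer)"            # free cooling, ~85% of hours
    if outdoor_temp_f - EVAP_APPROACH_F <= SUPPLY_SETPOINT_F and outdoor_rh_pct < RH_HIGH:
        return "ambient air + atomizers"             # latent cooling extends economizer hours
    return "fluid coolers / N+1 chillers"            # mechanical backup

print(cooling_mode(68.0, 35.0))   # -> ambient air (economizer)
print(cooling_mode(82.0, 30.0))   # -> ambient air + atomizers
print(cooling_mode(95.0, 60.0))   # -> fluid coolers / N+1 chillers
```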
9. The facility has power feeds from diverse substations and switching gear. The generator
implementation resulted in a case study, with Brent's interview published in Mission
Critical Magazine and in the vendor's internal training video series. The generators, with
integrated paralleling gear, step in within 6 seconds of a power anomaly. The UPS battery
backup can run for 10 minutes at full load. The UPSs are DSP-controlled and IGBT-based,
running at 96% efficiency even at lower loads. The savings from eliminating the raised
floor paid for the overhead power bus, which allows us to provision power changes at a
moment's notice without the assistance of an electrician. The CPDUs are painted in
either red or white school colors to identify power side A or B. The facility is 208V with
no neutral wire. The electrical design is based on a block-redundant architecture
because of the facility size and Tier 3 requirements. Each component was factory tested
to tight performance tolerances and then unit- and system-commissioned multiple
times onsite. The electrical equipment collects 5,000 data points over
Modbus TCP to ensure continuous operation and tuning.
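For illustration, the snippet below shows a bare-bones Modbus TCP "read holding registers" poll of the kind that could feed the data collection described above, using only the Python standard library. The IP address, unit ID, and register addresses are placeholders; the real facility polls on the order of 5,000 points across many devices through its monitoring systems.

```python
# Minimal Modbus TCP polling sketch (function 0x03, Read Holding Registers).
# Host, unit ID, and register map are hypothetical examples, not the facility's.
import socket
import struct

def read_holding_registers(host: str, start: int, count: int, unit: int = 1) -> list[int]:
    """Issue a single Modbus TCP Read Holding Registers request and decode the reply."""
    # MBAP header: transaction id, protocol id (0), length, unit id, then the PDU.
    request = struct.pack(">HHHBBHH", 1, 0, 6, unit, 0x03, start, count)
    with socket.create_connection((host, 502), timeout=2.0) as sock:
        sock.sendall(request)
        # For brevity this assumes the whole reply arrives in two reads.
        header = sock.recv(9)                      # MBAP + function code + byte count
        byte_count = header[8]
        payload = sock.recv(byte_count)
    return list(struct.unpack(f">{count}H", payload))

# Hypothetical example: read two 16-bit registers (e.g. a power meter value split
# across two words) from a device at 10.0.0.50.
# print(read_holding_registers("10.0.0.50", start=100, count=2))
```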
10. The facility greatly exceeds the seismic requirements for an "essential facility". Other confidential
aspects of the facility are equally impressive. The facility has no elevation change throughout the entire
equipment lifecycle. The facility has generous receiving, staging, and build cages for the tenants, with
centrally controlled network access. The facility was built with workflows in mind, including those of the
diverse maintenance crews, facility administration staff, and tenants.
1,398 fiber strands reach the data center over many diverse paths, providers, technologies, and
types of equipment. One technology used is DWDM, capable of 80 wavelengths per fiber pair. We have mostly
10G circuits with some 100G. The carrier facility entrances and connections are physically diverse and
redundantly separated across the room, and the highly resilient carrier facility is duplicated a
hundred feet away for even greater fault tolerance. The facility has direct fiber connectivity to the other
major carrier hotels. This site has become a highly desirable carrier hotel with capabilities not matched
elsewhere in the state.
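As a rough, hedged back-of-envelope only: if every strand were lit as a DWDM pair at the stated 80 wavelengths, the numbers above imply a very large theoretical transport ceiling. Actual lit capacity is far lower and depends on which circuits are provisioned.

```python
# Theoretical ceiling implied by the figures above; not a statement of lit capacity.
strands = 1398
pairs = strands // 2          # 699 fiber pairs
waves_per_pair = 80           # DWDM capability stated above
gbps_per_wave = 10            # mostly 10G circuits today (some 100G)
print(pairs * waves_per_pair * gbps_per_wave / 1000, "Tb/s ceiling at 10G waves")
# -> 559.2 Tb/s; roughly 10x that if every wave were 100G
```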
The cabling is TIA-942 compliant with some notable improvements. The MDA in the computer room is
physically split between the redundant sides for both cabling and equipment. The cable pathways and
facilities are designed to accommodate 10x growth, and the plant is 100G ready. The pathways include many
novel solutions to accommodate cabinet seismic isolation, layout flexibility, manageability, and security.
The power and computer cabling were engineered together as complete systems. As a direct result of our
work, the world's largest cabinet manufacturer incorporated our custom design into their core products as
their new standard solution. The leading carrier fiber frame manufacturer code-names their new core
product after a team member.
The pods are arranged in two rows of fifteen cabinets, and the pod configuration achieves approximately 1% air
bypass. We have provisioned up to four CPDUs at 60A, 3-phase, 208V each. The network is dual center-of-row. The
cabling pathways are designed to extend inside the cabinets. We designed and fabricated many custom
parts throughout the facility. The NAS and SAN equipment is all mounted on seismic isolation bases.
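A quick worked number from the 60A, 3-phase, 208V feeds mentioned above is sketched below. The 80% continuous-load derating and unity power factor are standard assumptions added for illustration, not figures stated in this document.

```python
# Rough usable capacity per 60A / 208V 3-phase cabinet feed, assuming an 80% NEC
# continuous-load derating and unity power factor (both assumptions, not stated above).
import math

amps, volts, derate = 60, 208, 0.8
kw_per_feed = math.sqrt(3) * volts * amps * derate / 1000
print(f"{kw_per_feed:.1f} kW usable per feed")           # ~17.3 kW
print(f"{4 * kw_per_feed:.1f} kW if all four feeds were loaded")  # ~69.2 kW, before A/B redundancy
```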
Consistent outcomes require consistent behaviors. Data center policies, procedures, standards and
training modules have been created to achieve consistent outcomes.