Data Centers
Mapping Cisco Nexus, Catalyst, and MDS Logical Architectures
into PANDUIT Physical Layer Infrastructure Solutions




Introduction
The growing speed and footprint of data centers is challenging IT Managers to effectively budget and develop reliable,
high-performance, secure, and scalable network infrastructure environments. This growth is having a direct impact on the
amount of power and cooling required to support overall data center demands. Delivering reliable power and directly cooling
the sources that are consuming the majority of the power can be extremely difficult if the data center is not planned correctly.

This design guide examines how physical infrastructure designs can support a variety of network layer topologies in
order to achieve a truly integrated physical layer infrastructure. By understanding the network architecture governing the
arrangement of switches and servers throughout the data center, network stakeholders can map out a secure and scalable
infrastructure design to support current applications and meet anticipated bandwidth requirements and transmission speeds.

The core of this guide presents a virtual walk through the data center network architecture, outlining the relationships of key
physical layer elements including switches, servers, power, thermal management, racks/cabinets, cabling media, and cable
management. The deployment of Cisco hardware in two different access models are addressed: Top of Rack (ToR) and
End of Row (EoR).



Top of Mind Data Center Issues

The following issues are critical to the process of building and maintaining cost-effective network infrastructure solutions for data centers:

      • Uptime
        Uptime is the key metric by which network reliability is measured, and can be defined as the amount of time the network is available without interruption to application environments. The most common service interruptions to the physical layer result from operational changes.

      • Scalability
        When designing a data center, network designers must balance today’s known scalability requirements with tomorrow’s anticipated user demands. Traffic loads and bandwidth/distance requirements will continue to vary throughout the data center, which translates to a need to maximize your network investment.

      • Security
        A key purpose of the data center is to house mission critical applications in a reliable, secure environment. Environmental security comes in many forms, from blocking unauthorized access to monitoring system connectivity at the physical layer. Overall, the more secure your network is, the more reliable it is.

Developing the Integrated Infrastructure Solution

Data center planning requires the close collaboration of business, IT, and facilities management teams to develop an integrated solution. Understanding some general planning relationships helps you translate business requirements into practical data center networking solutions.


Business Requirements Drive Data Center Design
A sound data center planning process is inclusive of the needs of various business units. Indeed, the process requires the close collaboration of business, IT, and facilities management teams to develop a broad yet integrated data center solution set. Understanding some general planning relationships helps you translate business requirements into practical data center networking solutions.

Business requirements ultimately drive all data center planning decisions (see Figure 1). On a practical level, these requirements directly impact the type of applications deployed and Service Level Agreements (SLAs) adopted by the organization. Once critical uptime requirements are in place and resources (servers, storage, compute, and switches) have been specified to support mission-critical applications, the required bandwidth, power, and cooling loads can be estimated.

Some data center managers try to limit the number of standard compute resources to fewer hardware platforms and operating systems, which makes planning decisions related to cabinet, row, and room more predictable over time. Other managers base their design decisions solely on the business application, which presents more of a challenge in terms of planning for future growth. The data center network architecture discussed in this guide uses a modular, End of Row (EoR) model and includes all compute resources and their network, power, and storage needs. The resulting requirements translate to multiple LAN, SAN, and power connections at the physical layer infrastructure. These requirements can be scaled up from cabinet to row, from row to zone, and from zones to room.

Figure 1. Business Requirements (stakeholders: business, IT, facilities; elements: business requirements, Service Level Agreement, applications software, active equipment [servers, storage], network requirements [switching], structured cabling, power requirements, cooling requirements, data center solution)



Designing Scalability into the Physical Layer
When deploying large volumes of servers inside the data center it is extremely important that the design footprint is scalable. However, access models vary between networks and can often be extremely complex to design. The integrated network topologies discussed in this guide take a modular, platform-based approach in order to scale up or down as required within a cabinet or room. It is assumed that all compute resources incorporate resilient network, power, and storage resources. This assumption translates to multiple LAN, SAN, and power connections within the physical layer infrastructure.

One way to simplify the design and simultaneously incorporate a scalable layout is to divide the raised floor space into modular, easily duplicated sub-areas. Figure 2 illustrates the modular building blocks used to design scalability into the network architecture at both OSI Layers 1 and 2. The logical architecture is divided into three discrete layers, and the physical infrastructure is designed and divided into manageable sub-areas called “Pods.” This example shows a typical data center with two zones and 20 Pods distributed throughout the room; core and aggregation layer switches are located in each zone for redundancy, and access layer switches are located in each Pod to support the compute resources within the Pod.
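
The zone/Pod building blocks described above can be expressed as a simple data model. The following Python sketch is illustrative only: the zone and Pod counts come from the Figure 2 example (two zones, 20 Pods), while the per-Pod rack mix is a placeholder assumption rather than a figure from this guide.

```python
# Illustrative model of the modular layout in Figure 2: zones hold the
# core/aggregation switching, Pods hold the access switching plus the
# server and storage racks they serve. Rack counts per Pod are placeholders.

ZONES = 2
PODS = 20
RACKS_PER_POD = {"network": 2, "server": 10, "storage": 2}   # assumed mix

def build_room(zones=ZONES, pods=PODS, racks_per_pod=RACKS_PER_POD):
    """Return a nested structure describing the zones and Pods in the room."""
    return {
        "zones": [{"id": z, "layers": ["core", "aggregation"]} for z in range(zones)],
        "pods": [{"id": p, "layers": ["access"], "racks": dict(racks_per_pod)}
                 for p in range(pods)],
    }

if __name__ == "__main__":
    room = build_room()
    total_racks = sum(sum(pod["racks"].values()) for pod in room["pods"])
    print(f"{len(room['zones'])} zones, {len(room['pods'])} Pods, "
          f"{total_racks} racks in Pods")
```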


Figure 2. Mapping the Logical Architecture to the Cabling Infrastructure (the data center room is divided into modules, zones, and Pods; each Pod contains network, server, and storage racks arranged along hot and cold aisles)




Network Access Layer Environments
This guide describes two models for access layer switching environments – Top of Rack (ToR) and End of Row (EoR) – and reviews the design techniques needed for the successful deployment of these configurations within an integrated physical layer solution. When determining whether to deploy a ToR or EoR model it is important to understand the benefits and challenges associated with each:

        • A ToR design reduces cabling congestion, which enhances flexibility of network deployment and installation. Trade-offs include reduced manageability and network scalability for high-density deployments, because more access switches must be managed than in an EoR configuration.

        • An EoR model (also sometimes known as Middle of Row [MoR]) leverages chassis-based technology for one or more rows of servers to enable higher densities and greater scalability throughout the data center. Large modular chassis such as the Cisco Nexus 7000 Series and Cisco Catalyst 6500 Series allow for greater densities and performance with higher reliability and redundancy. Figure 2 represents an EoR deployment with multiple Pods distributed throughout the room.


Note: Integrated switching configurations, in which applications reside on blade servers that have integrated switches built into each chassis, are not covered in
this guide. These designs are used only in conjunction with blade server technologies and would be deployed in a similar fashion as EoR configurations.




Top of Rack (ToR) Model
The design characteristic of a ToR model is the inclusion of an access layer switch in each server cabinet, so the physical
layer solution must be designed to support the switching hardware and access-layer connections. One cabling
benefit of deploying access layer switches in each server cabinet is the ability to link to the aggregation layer using
long-reach small form factor fiber connectivity. The use of fiber eliminates any reach or pathway challenges presented
by copper connectivity to allow greater flexibility in selecting the physical location of network equipment.

Figure 3 shows a typical logical ToR network topology, illustrating the various redundant links and distribution of connectivity between access and aggregation switches. This example utilizes the Cisco Nexus 7010 for the aggregation layer and a Cisco Catalyst 4948 for the access layer. The Cisco Catalyst 4948 provides 10GbE links routed out of the cabinet back to the aggregation layer and 1GbE links for server access connections within the cabinet.

Once the logical topology has been defined, the next step is to map a physical layer solution directly to that topology. With a ToR model it is important to understand the number of network connections needed for each server resource. The basic rule governing the number of ToR connections is that any server deployment requiring more than 48 links requires an additional access layer switch in each cabinet to support the higher link volume. For example, if thirty (30) 1 RU servers that each require three copper and two fiber connections are deployed within a 45 RU cabinet, an additional access layer switch is needed for each cabinet. Figure 4 shows the typical rear view ToR design, including cabinet connectivity requirements at aggregation and access layers.

Figure 3. Logical ToR Network Topology
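
The 48-link rule above reduces to a short calculation. The Python sketch below is illustrative only; it applies the rule with the 48-port Catalyst 4948 as the access switch and, as an assumption, counts only the copper LAN links (treating the per-server fiber connections as SAN links served separately).

```python
import math

PORTS_PER_ACCESS_SWITCH = 48   # e.g. one Cisco Catalyst 4948 ToR switch

def access_switches_needed(servers_per_cabinet, links_per_server):
    """Access switches a ToR cabinet needs, per the >48-link rule in the text."""
    total_links = servers_per_cabinet * links_per_server
    return total_links, math.ceil(total_links / PORTS_PER_ACCESS_SWITCH)

# Example from the text: thirty 1 RU servers with three copper links each.
# (The two fiber connections per server are assumed here to terminate on the
# SAN fabric rather than the LAN access switch, so they are not counted.)
links, switches = access_switches_needed(servers_per_cabinet=30, links_per_server=3)
print(f"{links} LAN links per cabinet -> {switches} access switches")   # 90 -> 2
```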




Figure 4. Rear View of ToR Configuration (Top of Rack server cabinets with patch panels, patch panel cross-connects, and redundant A–B network aggregation points)


Density considerations are tied to the level of redundancy deployed to support mission critical hardware throughout the data center. It is critical to choose a deployment strategy that accommodates every connection and facilitates good cable management.

High-density ToR deployments like the one shown in Figure 5 require more than 48 connections per cabinet. Two access switches are deployed in each NET-ACCESS™ Cabinet to support complete redundancy throughout the network. All access connections are routed within the cabinet, and all aggregation links are routed up and out of the cabinet through the FIBERRUNNER® Cable Routing System back to the horizontal distribution area.

Lower-density ToR deployments require fewer than 48 connections per cabinet (see Figure 6). This design shares network connections between neighboring cabinets to provide complete redundancy to each compute resource.

Figure 5. Dual Switch – Server Cabinet Rear View (SAN, LAN, and power connections at the cabinet rear)




                                     Figure 6. Single Switch – Server Cabinet Rear View



The cross-over of network connections between cabinets presents a cabling challenge that needs to be addressed in the physical design to avoid problems when performing any type of operational change after initial installation. To properly route these connections between cabinets, dedicated pathways must be defined between each cabinet to accommodate the cross-over of connections. The most common approach is to use PANDUIT overhead cable routing systems that attach directly to the top of the NET-ACCESS™ Cabinet to provide dedicated pathways for all connectivity routing between cabinets, as shown in Figure 6.

Table 1 itemizes the hardware needed to support a typical ToR deployment in a data center with 16,800 square feet of raised floor space. Typically in a ToR layout, the Cisco Catalyst 4948 is located towards the top of the cabinet. This allows heavier equipment such as servers to occupy the lower portion of the cabinet, closer to the cooling source, to allow for proper thermal management of each compute resource. In this layout, 1 RU servers are specified at 24 servers per cabinet, with two LAN and two SAN connections per server to leverage 100% of the LAN switch ports allocated to each cabinet.

Connectivity is routed overhead between cabinets to minimize congestion and allow for greater redundancy within the LAN and SAN environment. All fiber links from the LAN and SAN equipment are routed via overhead pathways back to the Cisco Nexus, Catalyst, and MDS series switches at the aggregation layer. The PANDUIT® FIBERRUNNER® Cable Routing System supports overhead fiber cabling, and the PANDUIT® NET-ACCESS™ Overhead Cable Routing System can be leveraged with horizontal cable managers to support copper routing and patching between cabinets.


Data Center Assumptions                         Quantity
Raised Floor Square Footage                       16,800
Servers                                            9,792
Cisco Nexus 7010                                      10
Cisco Catalyst 4948                                  408
Server Cabinets                                      408
Network Cabinets (LAN & SAN Equipment)                33
Midrange & SAN Equipment Cabinets                    124

Server Specifications:
2 — 2.66 GHz Intel Quad Core Xeon X5355
2 x 670 W Hot-Swap
4 GB RAM — (2) 2048 MB DIMM(s)
2 — 73GB 15K-rpm Hot-Swap SAS — 3.5

                                       Watts Per Device   Per Cabinet   Cabs Per Pod   Pods Per Room   Quantity   Total Watts
Servers                                       350              24            102              4          9,792     3,427,200
Cisco Catalyst 4948 Switches (Access)         350               1            102              4            408       142,800
Cisco MDS (Access)                            100               1            102              4            408        40,800
Cisco Appliance Allocation                  5,400               —              —              —              8        43,200
Cisco Nexus 7010 (Aggregation)              5,400               —              2              4              8        43,200
Cisco MDS (Aggregation)                     5,400               —              2              4              8        43,200
Cisco Nexus 7010 (Core)                     5,400               —              —              —              2        10,800
Cisco MDS (Core)                            5,400               —              —              —              2        10,800
Midrange/SAN Equipment Cabinets             4,050               —              —              —            124       502,200
Midrange/SAN Switching MDS                  5,400               —              —              —              4        21,600
Midrange/SAN Switching Nexus 7010           5,400               —              —              —              4        21,600

                                                                                       Total Watts:      4,307,400
                                                                                       Total Kilowatts:   4,307.40
                                                                                       Total Megawatts:       4.31

Table 1. Room Requirements for Typical ToR Deployment
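
The Total Watts value in Table 1 is simply the sum of device wattage multiplied by quantity for each row. The following Python sketch reproduces that roll-up using only the figures already shown in Table 1.

```python
# Per-device wattage and room-level quantities taken directly from Table 1.
TOR_LOADS = [
    ("Servers",                             350,  9792),
    ("Cisco Catalyst 4948 (Access)",        350,   408),
    ("Cisco MDS (Access)",                  100,   408),
    ("Cisco Appliance Allocation",         5400,     8),
    ("Cisco Nexus 7010 (Aggregation)",     5400,     8),
    ("Cisco MDS (Aggregation)",            5400,     8),
    ("Cisco Nexus 7010 (Core)",            5400,     2),
    ("Cisco MDS (Core)",                   5400,     2),
    ("Midrange/SAN Equipment Cabinets",    4050,   124),
    ("Midrange/SAN Switching MDS",         5400,     4),
    ("Midrange/SAN Switching Nexus 7010",  5400,     4),
]

total_watts = sum(watts * qty for _, watts, qty in TOR_LOADS)
print(f"Total: {total_watts:,} W = {total_watts / 1000:,.2f} kW "
      f"= {total_watts / 1_000_000:.2f} MW")   # 4,307,400 W = 4,307.40 kW = 4.31 MW
```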



End of Row (EoR) Model
In an EoR model, server cabinets contain patch fields but not access switches. In this model, the total number of servers per
cabinet and I/Os per server determines the number of switches used in each Pod, which then drives the physical layer
design decisions. The typical EoR Pod contains two Cisco Nexus or Cisco Catalyst switches for redundancy. The length of
each row within the Pod is determined by the density of the network switching equipment as well as the distance from the
server to the switch. For example, if each server cabinet in the row utilizes 48 connections and the switch has a capacity
for 336 connections, the row would have the capacity to support up to seven server cabinets with complete network
redundancy, as long as the seven cabinets are within the maximum cable length to the switching equipment.
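
The row-sizing rule just described reduces to a simple division. The Python sketch below is an illustration using the example values from the text (336-port switch capacity, 48 connections per cabinet); it ignores the cable-length constraint noted above.

```python
def cabinets_per_row(switch_port_capacity, connections_per_cabinet):
    """Server cabinets one EoR access switch can serve, ignoring cable-length limits."""
    return switch_port_capacity // connections_per_cabinet

# Example from the text: 336-port switch capacity, 48 connections per cabinet.
print(cabinets_per_row(336, 48))   # 7 server cabinets per row (per switch)
```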

Figure 7. Top View of EoR Configuration (switch cabinets located in the middle of the row)

Figure 7 depicts a top view of a typical Pod design for an EoR configuration, and shows proper routing of connectivity from
a server cabinet to both access switches. Network equipment is located in the middle of the row to distribute redundant
connections across two rows of cabinets to support a total of 14 server cabinets. The red line represents copper LAN “A”
connections and the blue line represents copper LAN “B” connections. For true redundancy the connectivity takes two diverse
pathways using PANDUIT ® GRIDRUNNER ™ Underfloor Cable Routing Systems to route cables underfloor to each cabinet. For
fiber connections there is a similar pathway overhead to distribute all SAN “A” and “B” connections to each cabinet.

As applications continue to put even greater demands on
the network infrastructure it is critical to have the appropriate
cabling infrastructure in place to support these increased
bandwidth and performance requirements. Each EoR-arranged
switch cabinet is optimized to handle the high density
requirements from the Cisco Nexus 7010 switch. Figure 8
depicts the front view of a Cisco Nexus 7010 switch in a
NET-ACCESS ™ Cabinet populated with Category 6A 10 Gigabit
cabling leveraged for deployment in an EoR configuration. It is
critical to properly manage all connectivity exiting the front of
each switch. The EoR-arranged server cabinet is similar to a
typical ToR-arranged cabinet (see Figure 5), but the access layer switches for both LAN and SAN connections are replaced with structured cabling.

Table 2 itemizes the hardware needed to support a typical EoR deployment in a data center with 16,800 square feet of raised floor space. The room topology for the EoR deployment is not drastically different from the ToR model. The row size is determined by the typical connectivity requirements for any given row of server cabinets. Most server cabinets contain a minimum of 24 connections and sometimes exceed 48 connections per cabinet. All EoR reference architectures are based around 48 copper cables and 24 fiber strands for each server cabinet.

Figure 8. PANDUIT® NET-ACCESS™ Cabinet with Cisco Nexus Switch
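
Those per-cabinet allocations scale linearly with the cabinet count in Table 2. The short Python sketch below simply multiplies them out; the 336 server cabinets come from Table 2 and the 48-copper/24-fiber allocation from the paragraph above.

```python
SERVER_CABINETS = 336        # from Table 2 (EoR deployment)
COPPER_PER_CABINET = 48      # copper cables allocated per server cabinet
FIBER_PER_CABINET = 24       # fiber strands allocated per server cabinet

copper_runs = SERVER_CABINETS * COPPER_PER_CABINET
fiber_strands = SERVER_CABINETS * FIBER_PER_CABINET
print(f"{copper_runs:,} copper runs and {fiber_strands:,} fiber strands room-wide")
# 16,128 copper runs and 8,064 fiber strands room-wide
```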

Data Center Assumptions                         Quantity
Raised Floor Square Footage                       16,800
Servers                                            9,792
Cisco Nexus 7010                                      50
Server Cabinets                                      336
Network Cabinets (LAN & SAN Equipment)               108
Midrange & SAN Equipment Cabinets                    132

Server Specifications:
2 — 2.66 GHz Intel Quad Core Xeon X5355
2 x 835 W Hot-Swap
16 GB RAM — (4) 4096 MB DIMM(s)
2 — 146GB 15K-rpm Hot-Swap SAS — 3.5

                                       Watts Per Device   Per Cabinet   Cabs Per Pod   Pods Per Room   Quantity   Total Watts
Servers                                       450              12             14             24          9,792     1,814,400
Cisco Appliance Allocation                  5,400               —              —              —              8        43,200
Cisco Nexus 7010 (Access)                   5,400               —              2             24             48       259,200
Cisco MDS (Access)                          5,400               —              2             24             48       259,200
Cisco Nexus 7010 (Core)                     5,400               —              —              —              2        10,800
Cisco MDS (Core)                            5,400               —              —              —              2        10,800
Midrange/SAN Equipment Cabinets             4,050               —              —              —            132       502,200
Midrange/SAN Switching MDS                  5,400               —              —              —              4        21,600
Midrange/SAN Switching Nexus 7010           5,400               —              —              —              4        21,600

                                                                                       Total Watts:      4,307,400
                                                                                       Total Kilowatts:   4,307.40
                                                                                       Total Megawatts:       4.31

Table 2. Room Requirements for Typical EoR Deployment




Considerations Common to ToR and EoR Configurations
PANDUIT is focused on providing high-density, flexible physical layer solutions that maximize data center space utilization
and optimize energy use. The following sections describe cabinet, cooling, and pathway considerations that are common to
all logical architectures.

Cabinets
Cabinets must be specified to allow for maximum scalability and flexibility within the data center. The vertically mounted patch panel within the cabinet frees up additional rack units that can be used to install more servers within the 45 rack units available. These vertically mounted panels also provide superior cable management versus traditional ToR horizontal patch panels by moving each network connection closer to the server network interface card, ultimately allowing a shorter patching distance with consistent lengths throughout the cabinet.

Figure 9 represents a typical vertical patch panel deployment
in the NET-ACCESS ™ Server Cabinet. Complete power and data
separation is achieved through the use of vertical cable
management on both sides of the cabinet frame. The cabinet
also allows for both overhead and underfloor cable routing
for different data center applications. Shorter power cords are an option that removes the added cable slack from the longer cords shipped with server hardware.
Using shorter patch cords and leveraging PANDUIT vertical
cable management integrated into the NET-ACCESS ™ Cabinet
alleviates potential airflow issues at the back of each server
that could result from poorly managed cabling.




                                                                             Figure 9. Cabinet Vertical Mount Patch Panel


Thermal Management
Data center power requirements continue to increase at high rates, making it difficult to plan appropriately for the proper cooling systems needed to support your room. Cabinets play a critical role in managing the high heat loads generated by active equipment. Each cabinet will require different power loads based upon the type of servers being installed as well as the workload being requested of each compute resource. Understanding cabinet level power requirements gives greater visibility into overall room conditions.

PANDUIT Laboratories’ research into thermal management includes advanced computational fluid dynamics (CFD) analysis to model optimal airflow patterns and above-floor temperature distributions throughout the data center. This data is then used to develop rack, cabinet, and cable management systems that efficiently route and organize critical IT infrastructure elements. Figure 10 represents an analysis based upon the assumptions for the 16,800 square foot EoR deployment.

PANDUIT® NET-ACCESS™ Cabinets feature large pathways for efficient cable routing and improved airflow while providing open-rack accessibility to manage, protect, and showcase cabling and equipment. Elements such as exhaust ducting, filler panels, and the PANDUIT® COOL BOOT™ Raised Floor Assembly support hot and cold aisle separation in accordance with the TIA-942 standard. These passive solutions (no additional fans or compressors) minimize bypass air in order to manage higher heat loads in the data center and ensure proper equipment operation and uptime.




Figure 10. Data Center EoR Thermal Analysis (designed in a hot/cold aisle architecture; 12 – 20 ton CRAC units outside and 12 – 30 ton CRAC units inside; ceiling plenum utilized for return air; all perforated tiles at 25% open; peak temperature was 114°)
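
Cabinet-level heat load planning, discussed under Thermal Management above, often starts from a simple airflow estimate. The Python sketch below applies the common sensible-heat rule of thumb CFM ≈ 3.16 × watts / ΔT(°F); it is a planning illustration only and is not drawn from the PANDUIT CFD analysis behind Figure 10.

```python
def cabinet_airflow_cfm(heat_load_watts, delta_t_f=20.0):
    """Approximate airflow (CFM) needed to remove a cabinet's heat load.

    Uses the common rule of thumb CFM ~= 3.16 * W / dT(F) for air at sea
    level; delta_t_f is the assumed temperature rise across the equipment.
    """
    return 3.16 * heat_load_watts / delta_t_f

# Example: a ToR server cabinet from Table 1 (24 servers x 350 W, plus one
# 350 W Catalyst 4948 and one 100 W MDS access switch).
cabinet_watts = 24 * 350 + 350 + 100
print(f"{cabinet_watts} W -> about {cabinet_airflow_cfm(cabinet_watts):.0f} CFM")
```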



Pathways
The variety and density of data center cables mean that there is no “one size fits all” solution when planning cable pathways. Designers usually specify a combination of pathway options. Many types and sizes are available for designers to choose from, including wire basket, ladder rack, J-hooks, conduit, solid metal tray, and fiber-optic cable routing systems. Factors such as room height, equipment cable entry holes, rack and cabinet density, and cable types, counts, and diameters also influence pathway decisions. The pathway strategies developed for the ToR and EoR models all leverage the FIBERRUNNER® Cable Routing System to route horizontal fiber cables, and use the NET-ACCESS™ Overhead Cable Routing System in conjunction with wire basket or ladder rack for horizontal copper and backbone fiber cables.

This strategy offers several benefits:

      • The combination of overhead fiber routing system and cabinet routing system ensures physical separation between the copper and fiber cables, as recommended in the TIA-942 data center standard.

      • Overhead pathways such as the PANDUIT® FIBERRUNNER® Cable Routing System protect fiber optic jumpers, ribbon interconnect cords, and multi-fiber cables in a solid, enclosed channel that provides bend radius control; the location of the pathway also does not disrupt raised floor cooling.

      • The overall visual effect is organized, sturdy, and impressive.


Conclusion
Next-generation data center hardware such as the Cisco
Nexus 7010 switch provides increased network capacity
and functionality in the data center, which in turn is
placing greater demands on the cabling infrastructure.
This guide describes the ways that PANDUIT structured
cabling solutions map easily to the logical architectures
being deployed in today’s high-performance networks
to achieve a unified physical layer infrastructure.
Network stakeholders can use modular designs for
both hardware architectures and cabling layouts to
ensure that the system will scale over the life of the
data center to survive multiple equipment refreshes
and meet aggressive uptime goals.



About PANDUIT
PANDUIT is a leading, world-class developer and
provider of innovative networking and electrical
solutions. For more than 50 years, PANDUIT has
engineered and manufactured end-to-end solutions
that assist our customers in the deployment of the
latest technologies. Our global expertise and strong
industry relationships make PANDUIT a valuable and
trusted partner dedicated to delivering technology-driven
solutions and unmatched service. Through our
commitment to innovation, quality and service,
PANDUIT creates competitive advantages to earn
customer preference.



Cisco, Cisco Systems, Nexus, Catalyst, and MDS are
registered trademarks of Cisco Technology, Inc.




                       www.panduit.com • cs@panduit.com • 800-777-3300




                                                                          ©2008 PANDUIT Corp.
                                                                         ALL RIGHTS RESERVED.
                                                                                Printed in U.S.A.
                                                                                   WW-CPCB36
                                                                                       8/2008

                                                                                              11

Más contenido relacionado

La actualidad más candente

Essential Arb 2
Essential Arb 2Essential Arb 2
Essential Arb 2hinser14
 
No sql and data scalability
No sql and data scalabilityNo sql and data scalability
No sql and data scalabilityRoger Xia
 
Data Centre Hosting Services
Data Centre Hosting ServicesData Centre Hosting Services
Data Centre Hosting Servicesscottamc26
 
High-Performance Interoperable Architecture for Information Dominance
High-Performance Interoperable Architecture for Information DominanceHigh-Performance Interoperable Architecture for Information Dominance
High-Performance Interoperable Architecture for Information DominanceReal-Time Innovations (RTI)
 
C11 support for-mobility
C11 support for-mobilityC11 support for-mobility
C11 support for-mobilityRio Nguyen
 
New Generation of Storage Tiering
New Generation of Storage TieringNew Generation of Storage Tiering
New Generation of Storage TieringTony Pearson
 
WP107 How Data Center Management Software Improves Planning and Cuts OPEX
WP107   How Data Center Management Software Improves Planning and Cuts OPEXWP107   How Data Center Management Software Improves Planning and Cuts OPEX
WP107 How Data Center Management Software Improves Planning and Cuts OPEXSE_NAM_Training
 

La actualidad más candente (11)

Best Practices for Migration
Best Practices for MigrationBest Practices for Migration
Best Practices for Migration
 
Essential Arb 2
Essential Arb 2Essential Arb 2
Essential Arb 2
 
No sql and data scalability
No sql and data scalabilityNo sql and data scalability
No sql and data scalability
 
Data Centre Hosting Services
Data Centre Hosting ServicesData Centre Hosting Services
Data Centre Hosting Services
 
High-Performance Interoperable Architecture for Information Dominance
High-Performance Interoperable Architecture for Information DominanceHigh-Performance Interoperable Architecture for Information Dominance
High-Performance Interoperable Architecture for Information Dominance
 
C11 support for-mobility
C11 support for-mobilityC11 support for-mobility
C11 support for-mobility
 
variable data publishing
variable data publishingvariable data publishing
variable data publishing
 
Sparda bank Hamburg
Sparda bank HamburgSparda bank Hamburg
Sparda bank Hamburg
 
New Generation of Storage Tiering
New Generation of Storage TieringNew Generation of Storage Tiering
New Generation of Storage Tiering
 
Database Management
Database ManagementDatabase Management
Database Management
 
WP107 How Data Center Management Software Improves Planning and Cuts OPEX
WP107   How Data Center Management Software Improves Planning and Cuts OPEXWP107   How Data Center Management Software Improves Planning and Cuts OPEX
WP107 How Data Center Management Software Improves Planning and Cuts OPEX
 

Destacado

Data Center Energy Efficiency Best Practices
Data Center Energy Efficiency Best PracticesData Center Energy Efficiency Best Practices
Data Center Energy Efficiency Best Practices42U Data Center Solutions
 
Mo ict 2013 new data center design proposal - pts-0813-004006 - ptsp-c-13-0...
Mo ict   2013 new data center design proposal - pts-0813-004006 - ptsp-c-13-0...Mo ict   2013 new data center design proposal - pts-0813-004006 - ptsp-c-13-0...
Mo ict 2013 new data center design proposal - pts-0813-004006 - ptsp-c-13-0...khalid noman husainy
 
ACO Data Center Proposal
ACO Data Center ProposalACO Data Center Proposal
ACO Data Center ProposalJeff4459
 
Final Charter
Final CharterFinal Charter
Final Charterlneut03
 
Executive Presentation on adhering to Healthcare Industry compliance
Executive Presentation on adhering to Healthcare Industry complianceExecutive Presentation on adhering to Healthcare Industry compliance
Executive Presentation on adhering to Healthcare Industry complianceThomas Bronack
 
Presentation cisco data center security deep dive
Presentation   cisco data center security deep divePresentation   cisco data center security deep dive
Presentation cisco data center security deep divexKinAnx
 
Cisco Presentation
Cisco PresentationCisco Presentation
Cisco PresentationRBratton
 
Cisco data center support
Cisco data center supportCisco data center support
Cisco data center supportKrunal Shah
 
Presentation dc design for small and mid-size data center
Presentation   dc design for small and mid-size data centerPresentation   dc design for small and mid-size data center
Presentation dc design for small and mid-size data centerxKinAnx
 
Cisco & VMware Products & Services as of Nov 23, 08
Cisco & VMware Products & Services as of  Nov 23, 08Cisco & VMware Products & Services as of  Nov 23, 08
Cisco & VMware Products & Services as of Nov 23, 08gueste9924aa
 
Health Center Proposal
Health Center ProposalHealth Center Proposal
Health Center Proposalso0ozz
 
MetaFabric™ Architecture Virtualized Data Center: Design and Implementation G...
MetaFabric™ Architecture Virtualized Data Center: Design and Implementation G...MetaFabric™ Architecture Virtualized Data Center: Design and Implementation G...
MetaFabric™ Architecture Virtualized Data Center: Design and Implementation G...Juniper Networks
 

Destacado (14)

Data Center Energy Efficiency Best Practices
Data Center Energy Efficiency Best PracticesData Center Energy Efficiency Best Practices
Data Center Energy Efficiency Best Practices
 
Mo ict 2013 new data center design proposal - pts-0813-004006 - ptsp-c-13-0...
Mo ict   2013 new data center design proposal - pts-0813-004006 - ptsp-c-13-0...Mo ict   2013 new data center design proposal - pts-0813-004006 - ptsp-c-13-0...
Mo ict 2013 new data center design proposal - pts-0813-004006 - ptsp-c-13-0...
 
ACO Data Center Proposal
ACO Data Center ProposalACO Data Center Proposal
ACO Data Center Proposal
 
Final Charter
Final CharterFinal Charter
Final Charter
 
Data Center Server Rack Strategies
Data Center Server Rack StrategiesData Center Server Rack Strategies
Data Center Server Rack Strategies
 
Executive Presentation on adhering to Healthcare Industry compliance
Executive Presentation on adhering to Healthcare Industry complianceExecutive Presentation on adhering to Healthcare Industry compliance
Executive Presentation on adhering to Healthcare Industry compliance
 
Presentation cisco data center security deep dive
Presentation   cisco data center security deep divePresentation   cisco data center security deep dive
Presentation cisco data center security deep dive
 
Cisco Presentation
Cisco PresentationCisco Presentation
Cisco Presentation
 
Cisco data center support
Cisco data center supportCisco data center support
Cisco data center support
 
Presentation dc design for small and mid-size data center
Presentation   dc design for small and mid-size data centerPresentation   dc design for small and mid-size data center
Presentation dc design for small and mid-size data center
 
Cisco & VMware Products & Services as of Nov 23, 08
Cisco & VMware Products & Services as of  Nov 23, 08Cisco & VMware Products & Services as of  Nov 23, 08
Cisco & VMware Products & Services as of Nov 23, 08
 
Basic Proposal Writing, PTAC
Basic Proposal Writing, PTACBasic Proposal Writing, PTAC
Basic Proposal Writing, PTAC
 
Health Center Proposal
Health Center ProposalHealth Center Proposal
Health Center Proposal
 
MetaFabric™ Architecture Virtualized Data Center: Design and Implementation G...
MetaFabric™ Architecture Virtualized Data Center: Design and Implementation G...MetaFabric™ Architecture Virtualized Data Center: Design and Implementation G...
MetaFabric™ Architecture Virtualized Data Center: Design and Implementation G...
 

Similar a Datacenter Design Guide UNIQUE Panduit-Cisco

Ebook Guide To Data Centers
Ebook Guide To Data CentersEbook Guide To Data Centers
Ebook Guide To Data Centersrobgross144
 
Advanced Design and Optimization of Data Center Interconnection Networks.pptx
Advanced Design and Optimization of Data Center Interconnection Networks.pptxAdvanced Design and Optimization of Data Center Interconnection Networks.pptx
Advanced Design and Optimization of Data Center Interconnection Networks.pptxService Solutions Pvt. Ltd. (SSL)
 
Addressing the Challenges of Tactical Information Management in Net-Centric S...
Addressing the Challenges of Tactical Information Management in Net-Centric S...Addressing the Challenges of Tactical Information Management in Net-Centric S...
Addressing the Challenges of Tactical Information Management in Net-Centric S...Angelo Corsaro
 
Idc Reducing It Costs With Blades
Idc Reducing It Costs With BladesIdc Reducing It Costs With Blades
Idc Reducing It Costs With Bladespankaj009
 
Towards an Open Data Center with an Interoperable Network (ODIN) Volume 3: So...
Towards an Open Data Center with an Interoperable Network (ODIN) Volume 3: So...Towards an Open Data Center with an Interoperable Network (ODIN) Volume 3: So...
Towards an Open Data Center with an Interoperable Network (ODIN) Volume 3: So...IBM India Smarter Computing
 
Deerns Data Center Chameleon 20110913 V1.1
Deerns Data Center Chameleon 20110913 V1.1Deerns Data Center Chameleon 20110913 V1.1
Deerns Data Center Chameleon 20110913 V1.1euroamerican
 
Data Center of the Future v1.0.pptx
Data Center of the Future v1.0.pptxData Center of the Future v1.0.pptx
Data Center of the Future v1.0.pptxjuergenJaeckel
 
router virtualization white paper
router virtualization white paperrouter virtualization white paper
router virtualization white paperPranav Dharwadkar
 
Data Centre Overview
Data Centre OverviewData Centre Overview
Data Centre OverviewLogicalis
 
Logicalis - Data Centre Overview
Logicalis - Data Centre OverviewLogicalis - Data Centre Overview
Logicalis - Data Centre OverviewLogicalis
 
Designing network topology.pptx
Designing network topology.pptxDesigning network topology.pptx
Designing network topology.pptxKISHOYIANKISH
 
Cache and consistency in nosql
Cache and consistency in nosqlCache and consistency in nosql
Cache and consistency in nosqlJoão Gabriel Lima
 
Research Paper  Find a peer reviewed article in the following d.docx
Research Paper  Find a peer reviewed article in the following d.docxResearch Paper  Find a peer reviewed article in the following d.docx
Research Paper  Find a peer reviewed article in the following d.docxeleanorg1
 

Similar a Datacenter Design Guide UNIQUE Panduit-Cisco (20)

Ebook Guide To Data Centers
Ebook Guide To Data CentersEbook Guide To Data Centers
Ebook Guide To Data Centers
 
Advanced Design and Optimization of Data Center Interconnection Networks.pptx
Advanced Design and Optimization of Data Center Interconnection Networks.pptxAdvanced Design and Optimization of Data Center Interconnection Networks.pptx
Advanced Design and Optimization of Data Center Interconnection Networks.pptx
 
Addressing the Challenges of Tactical Information Management in Net-Centric S...
Addressing the Challenges of Tactical Information Management in Net-Centric S...Addressing the Challenges of Tactical Information Management in Net-Centric S...
Addressing the Challenges of Tactical Information Management in Net-Centric S...
 
Idc Reducing It Costs With Blades
Idc Reducing It Costs With BladesIdc Reducing It Costs With Blades
Idc Reducing It Costs With Blades
 
Towards an Open Data Center with an Interoperable Network (ODIN) Volume 3: So...
Towards an Open Data Center with an Interoperable Network (ODIN) Volume 3: So...Towards an Open Data Center with an Interoperable Network (ODIN) Volume 3: So...
Towards an Open Data Center with an Interoperable Network (ODIN) Volume 3: So...
 
10g db grid
10g db grid10g db grid
10g db grid
 
TERM PAPER
TERM PAPERTERM PAPER
TERM PAPER
 
Deerns Data Center Chameleon 20110913 V1.1
Deerns Data Center Chameleon 20110913 V1.1Deerns Data Center Chameleon 20110913 V1.1
Deerns Data Center Chameleon 20110913 V1.1
 
122 whitepaper
122 whitepaper122 whitepaper
122 whitepaper
 
Data Center of the Future v1.0.pptx
Data Center of the Future v1.0.pptxData Center of the Future v1.0.pptx
Data Center of the Future v1.0.pptx
 
Software Defined Grid
Software Defined GridSoftware Defined Grid
Software Defined Grid
 
router virtualization white paper
router virtualization white paperrouter virtualization white paper
router virtualization white paper
 
Cloud & Data Center Networking
Cloud & Data Center NetworkingCloud & Data Center Networking
Cloud & Data Center Networking
 
Data Centre Overview
Data Centre OverviewData Centre Overview
Data Centre Overview
 
Logicalis - Data Centre Overview
Logicalis - Data Centre OverviewLogicalis - Data Centre Overview
Logicalis - Data Centre Overview
 
sdnppt.pdf
sdnppt.pdfsdnppt.pdf
sdnppt.pdf
 
Designing network topology.pptx
Designing network topology.pptxDesigning network topology.pptx
Designing network topology.pptx
 
Cache and consistency in nosql
Cache and consistency in nosqlCache and consistency in nosql
Cache and consistency in nosql
 
Research Paper  Find a peer reviewed article in the following d.docx
Research Paper  Find a peer reviewed article in the following d.docxResearch Paper  Find a peer reviewed article in the following d.docx
Research Paper  Find a peer reviewed article in the following d.docx
 
Software defined data center
Software defined data centerSoftware defined data center
Software defined data center
 

Último

New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...itnewsafrica
 
Top 10 Hubspot Development Companies in 2024
Top 10 Hubspot Development Companies in 2024Top 10 Hubspot Development Companies in 2024
Top 10 Hubspot Development Companies in 2024TopCSSGallery
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
Data governance with Unity Catalog Presentation
Data governance with Unity Catalog PresentationData governance with Unity Catalog Presentation
Data governance with Unity Catalog PresentationKnoldus Inc.
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesBernd Ruecker
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Hiroshi SHIBATA
 
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS:  6 Ways to Automate Your Data IntegrationBridging Between CAD & GIS:  6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integrationmarketing932765
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersNicole Novielli
 
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security ObservabilityGlenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security Observabilityitnewsafrica
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfpanagenda
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxLoriGlavin3
 
Connecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfConnecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfNeo4j
 
React Native vs Ionic - The Best Mobile App Framework
React Native vs Ionic - The Best Mobile App FrameworkReact Native vs Ionic - The Best Mobile App Framework
React Native vs Ionic - The Best Mobile App FrameworkPixlogix Infotech
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityIES VE
 
Varsha Sewlal- Cyber Attacks on Critical Critical Infrastructure
Business Requirements Drive Data Center Design

A sound data center planning process is inclusive of the needs of various business units. Indeed, the process requires the close collaboration of business, IT, and facilities management teams to develop a broad yet integrated data center solution set. Understanding some general planning relationships helps you translate business requirements into practical data center networking solutions.

Business requirements ultimately drive all data center planning decisions (see Figure 1). On a practical level, these requirements directly impact the type of applications deployed and the Service Level Agreements (SLAs) adopted by the organization. Once critical uptime requirements are in place and resources (servers, storage, compute, and switches) have been specified to support mission-critical applications, the required bandwidth, power, and cooling loads can be estimated.

Some data center managers try to standardize compute resources on fewer hardware platforms and operating systems, which makes planning decisions related to the cabinet, row, and room more predictable over time. Other managers base their design decisions solely on the business application, which presents more of a challenge in terms of planning for future growth. The data center network architecture discussed in this guide uses a modular End of Row (EoR) model and includes all compute resources and their network, power, and storage needs. The resulting requirements translate to multiple LAN, SAN, and power connections at the physical layer infrastructure. These requirements can be scaled up from cabinet to row, from row to zone, and from zone to room.

Figure 1. Business Requirements (diagram: business stakeholders define business requirements, applications/software, and Service Level Agreements; these drive the data center solution, with active equipment [servers, storage], structured cabling, and network/switching requirements owned by IT stakeholders, and power and cooling requirements owned by facilities stakeholders.)

Designing Scalability into the Physical Layer

When deploying large volumes of servers inside the data center it is extremely important that the design footprint is scalable. However, access models vary between networks and can often be extremely complex to design. The integrated network topologies discussed in this guide take a modular, platform-based approach in order to scale up or down as required within a cabinet or room. It is assumed that all compute resources incorporate resilient network, power, and storage resources. This assumption translates to multiple LAN, SAN, and power connections within the physical layer infrastructure.

One way to simplify the design and simultaneously incorporate a scalable layout is to divide the raised floor space into modular, easily duplicated sub-areas. Figure 2 illustrates the modular building blocks used to design scalability into the network architecture at both OSI Layers 1 and 2. The logical architecture is divided into three discrete layers, and the physical infrastructure is divided into manageable sub-areas called “Pods”. This example shows a typical data center with two zones and 20 Pods distributed throughout the room; core and aggregation layer switches are located in each zone for redundancy, and access layer switches are located in each Pod to support the compute resources within the Pod.
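To make the cabinet-to-row-to-zone-to-room scaling concrete, the sketch below models the modular layout described above as a simple data structure. It is illustrative only: the two zones and 20 Pods come from the example in the text, while the class and function names, and the default Pod sizes, are assumptions rather than part of any PANDUIT or Cisco reference design.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of the modular layout described above: a room is divided
# into zones (core/aggregation switching), and each zone contains Pods
# (access switching plus compute cabinets).

@dataclass
class Pod:
    name: str
    access_switches: int = 2       # redundant access layer per Pod
    server_cabinets: int = 14      # example value; actual counts depend on the access model

@dataclass
class Zone:
    name: str
    core_switches: int = 1
    aggregation_switches: int = 2  # redundant aggregation layer per zone
    pods: List[Pod] = field(default_factory=list)

def build_room(zones: int = 2, pods_per_zone: int = 10) -> List[Zone]:
    """Build the example room from the text: two zones, 20 Pods total."""
    return [
        Zone(
            name=f"Zone-{z + 1}",
            pods=[Pod(name=f"Pod-{z + 1}-{p + 1}") for p in range(pods_per_zone)],
        )
        for z in range(zones)
    ]

room = build_room()
total_pods = sum(len(zone.pods) for zone in room)
total_cabinets = sum(pod.server_cabinets for zone in room for pod in zone.pods)
print(f"{len(room)} zones, {total_pods} Pods, {total_cabinets} server cabinets")
```

Scaling the room up or down then becomes a matter of duplicating or removing Pods rather than redesigning the layout.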
Figure 2. Mapping the Logical Architecture to the Cabling Infrastructure (diagram: a DC Pod zone with alternating hot and cold aisles; each Pod module contains network, server, and storage racks, repeated from Module 1 to Module N.)

Network Access Layer Environments

This guide describes two models for access layer switching environments, Top of Rack (ToR) and End of Row (EoR), and reviews the design techniques needed for the successful deployment of these configurations within an integrated physical layer solution. When determining whether to deploy a ToR or EoR model it is important to understand the benefits and challenges associated with each (a rough access-switch count comparison follows this section's note):

      • A ToR design reduces cabling congestion, which enhances the flexibility of network deployment and installation. Trade-offs include reduced manageability and network scalability for high-density deployments, because more access switches must be managed than in an EoR configuration.

      • An EoR model (sometimes also known as Middle of Row [MoR]) leverages chassis-based technology for one or more rows of servers to enable higher densities and greater scalability throughout the data center. Large modular chassis such as the Cisco Nexus 7000 Series and Cisco Catalyst 6500 Series allow for greater densities and performance with higher reliability and redundancy. Figure 2 represents an EoR deployment with multiple Pods distributed throughout the room.

Note: Integrated switching configurations, in which applications reside on blade servers that have integrated switches built into each chassis, are not covered in this guide. These designs are used only in conjunction with blade server technologies and would be deployed in a similar fashion to EoR configurations.
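To illustrate the manageability trade-off noted above, the sketch below compares how many access switches each model puts on the floor, using the room assumptions from Tables 1 and 2 later in this guide (408 server cabinets with one Catalyst 4948 each for ToR, versus 24 Pods with two Nexus 7010 chassis each for EoR). The function names are illustrative and not part of any Cisco or PANDUIT tooling.

```python
def tor_access_switches(server_cabinets: int, switches_per_cabinet: int = 1) -> int:
    """ToR: every server cabinet carries its own access switch(es)."""
    return server_cabinets * switches_per_cabinet

def eor_access_switches(pods: int, chassis_per_pod: int = 2) -> int:
    """EoR: a redundant pair of chassis switches serves an entire Pod."""
    return pods * chassis_per_pod

# Room assumptions taken from Tables 1 and 2 of this guide.
print(tor_access_switches(server_cabinets=408))   # 408 Catalyst 4948s to manage
print(eor_access_switches(pods=24))               # 48 Nexus 7010 chassis to manage
```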
Top of Rack (ToR) Model

The design characteristic of a ToR model is the inclusion of an access layer switch in each server cabinet, so the physical layer solution must be designed to support the switching hardware and access-layer connections. One cabling benefit of deploying access layer switches in each server cabinet is the ability to link to the aggregation layer using long-reach small form factor fiber connectivity. The use of fiber eliminates any reach or pathway challenges presented by copper connectivity, allowing greater flexibility in selecting the physical location of network equipment.

Figure 3 shows a typical logical ToR network topology, illustrating the various redundant links and distribution of connectivity between access and aggregation switches. This example utilizes the Cisco Nexus 7010 for the aggregation layer and the Cisco Catalyst 4948 for the access layer. The Cisco Catalyst 4948 provides 10GbE links routed out of the cabinet back to the aggregation layer and 1GbE links for server access connections within the cabinet.

Once the logical topology has been defined, the next step is to map a physical layer solution directly to that topology. With a ToR model it is important to understand the number of network connections needed for each server resource. The basic rule governing the number of ToR connections is that any server deployment requiring more than 48 links needs an additional access layer switch in each cabinet to support the higher link volume. For example, if thirty (30) 1 RU servers that each require three copper and two fiber connections are deployed within a 45 RU cabinet, an additional access layer switch is needed in each cabinet. Figure 4 shows the typical rear view of a ToR design, including cabinet connectivity requirements at the aggregation and access layers.

Figure 3. Logical ToR Network Topology (diagram: redundant links between Cisco Nexus 7010 aggregation switches and Cisco Catalyst 4948 access switches.)

Figure 4. Rear View of ToR Configuration (diagram: Top of Rack patch panels and servers cross-connected to redundant network aggregation points A and B.)
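The 48-link rule above can be expressed as a small planning calculation. The sketch below is illustrative only: it uses the per-server connection counts from the example in the text (three copper plus two fiber per 1 RU server) and a 48-port access switch, and the function names are hypothetical. In practice the copper LAN and fiber SAN links would land on different switch families (Catalyst versus MDS), so the switch count here is a rough upper bound.

```python
import math

def cabinet_needs_extra_switch(total_links: int, switch_ports: int = 48) -> bool:
    """The rule from the text: more than 48 links means an additional access switch."""
    return total_links > switch_ports

def switches_needed(total_links: int, switch_ports: int = 48) -> int:
    """Rough planning count: one 48-port switch per 48 links (upper bound only;
    LAN copper and SAN fiber are actually served by different switch families)."""
    return math.ceil(total_links / switch_ports)

# Example from the text: thirty 1 RU servers, each with 3 copper + 2 fiber links.
links = 30 * (3 + 2)                      # 150 links in the cabinet
print(cabinet_needs_extra_switch(links))  # True -> add an access switch to the cabinet
print(switches_needed(links))             # 4, i.e. ceil(150 / 48)
```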
Density considerations are tied to the level of redundancy deployed to support mission critical hardware throughout the data center. It is critical to choose a deployment strategy that accommodates every connection and facilitates good cable management.

High-density ToR deployments like the one shown in Figure 5 require more than 48 connections per cabinet. Two access switches are deployed in each NET-ACCESS™ Cabinet to support complete LAN redundancy throughout the network. All access connections are routed within the cabinet, and all aggregation links are routed up and out of the cabinet through the FIBERRUNNER® Cable Routing System back to the horizontal distribution area.

Lower-density ToR deployments require fewer than 48 connections per cabinet (see Figure 6). This design shares network connections between neighboring cabinets to provide complete redundancy to each compute resource.

Figure 5. Dual Switch – Server Cabinet Rear View (callouts: SAN connections, access connections, and power connections within the cabinet.)

Figure 6. Single Switch – Server Cabinet Rear View
The cross-over of network connections between cabinets presents a cabling challenge that needs to be addressed in the physical design to avoid problems when performing any type of operational change after initial installation. To properly route these connections between cabinets, dedicated pathways must be defined between each cabinet to accommodate the cross-over of connections. The most common approach is to use PANDUIT overhead cable routing systems that attach directly to the top of the NET-ACCESS™ Cabinet to provide dedicated pathways for all connectivity routing between cabinets, as shown in Figure 6.

Table 1 itemizes the hardware needed to support a typical ToR deployment in a data center with 16,800 square feet of raised floor space. Typically in a ToR layout, the Cisco Catalyst 4948 is located towards the top of the cabinet. This allows heavier equipment such as servers to occupy the lower portion of the cabinet, closer to the cooling source, for proper thermal management of each compute resource. In this layout, 1 RU servers are specified at 24 servers per cabinet, with two LAN and two SAN connections per server to leverage 100% of the LAN switch ports allocated to each cabinet.

Connectivity is routed overhead between cabinets to minimize congestion and allow for greater redundancy within the LAN and SAN environment. All fiber links from the LAN and SAN equipment are routed via overhead pathways back to the Cisco Nexus, Catalyst, and MDS series switches at the aggregation layer. The PANDUIT® FIBERRUNNER® Cable Routing System supports overhead fiber cabling, and the PANDUIT® NET-ACCESS™ Overhead Cable Routing System can be leveraged with horizontal cable managers to support copper routing and patching between cabinets.

Data Center Assumptions                      Quantity
Raised Floor Square Footage                  16,800
Servers                                      9,792
Cisco Nexus 7010                             10
Cisco Catalyst 4948                          408
Server Cabinets                              408
Network Cabinets (LAN & SAN Equipment)       33
Midrange & SAN Equipment Cabinets            124

Server Specifications: 2 — 2.66 GHz Intel Quad Core Xeon X5355; 2 x 670 W Hot-Swap; 4 GB RAM — (2) 2048MB DIMM(s); 2 — 73GB 15K-rpm Hot-Swap SAS — 3.5"

                                          Watts per   Devices per   Cabinets   Pods per
                                          Device      Cabinet       per Pod    Room       Quantity   Total Watts
Servers                                   350         24            102        4          9,792      3,427,200
Cisco Catalyst 4948 Switches (Access)     350         1             102        4          408        142,800
Cisco MDS (Access)                        100         1             102        4          408        40,800
Cisco Appliance Allocation                5,400       —             —          —          8          43,200
Cisco Nexus 7010 (Aggregation)            5,400       —             2          4          8          43,200
Cisco MDS (Aggregation)                   5,400       —             2          4          8          43,200
Cisco Nexus 7010 (Core)                   5,400       —             —          —          2          10,800
Cisco MDS (Core)                          5,400       —             —          —          2          10,800
Midrange/SAN Equipment Cabinets           4,050       —             —          —          124        502,200
Midrange/SAN Switching MDS                5,400       —             —          —          4          21,600
Midrange/SAN Switching Nexus 7010         5,400       —             —          —          4          21,600

Total Watts: 4,307,400   Total Kilowatts: 4,307.40   Total Megawatts: 4.31

Table 1. Room Requirements for Typical ToR Deployment
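The cabinet, Pod, and room quantities in Table 1 roll up multiplicatively, so the room-level power budget can be reproduced with a short calculation. The sketch below is illustrative only; it hard-codes the per-device wattages and room-level quantities from Table 1, and the variable names are not drawn from any PANDUIT or Cisco tool.

```python
# (per-device watts, room-level quantity) pairs taken from Table 1 (ToR model).
# Example roll-up: server quantity 9,792 = 24 per cabinet x 102 cabinets per Pod x 4 Pods.
tor_loads = {
    "Servers":                           (350, 9792),
    "Cisco Catalyst 4948 (Access)":      (350, 408),
    "Cisco MDS (Access)":                (100, 408),
    "Cisco Appliance Allocation":        (5400, 8),
    "Cisco Nexus 7010 (Aggregation)":    (5400, 8),
    "Cisco MDS (Aggregation)":           (5400, 8),
    "Cisco Nexus 7010 (Core)":           (5400, 2),
    "Cisco MDS (Core)":                  (5400, 2),
    "Midrange/SAN Equipment Cabinets":   (4050, 124),
    "Midrange/SAN Switching MDS":        (5400, 4),
    "Midrange/SAN Switching Nexus 7010": (5400, 4),
}

total_watts = sum(watts * qty for watts, qty in tor_loads.values())
print(f"Total: {total_watts:,} W = {total_watts / 1000:,.2f} kW = {total_watts / 1e6:.2f} MW")
# Total: 4,307,400 W = 4,307.40 kW = 4.31 MW, matching the Table 1 totals.
```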
End of Row (EoR) Model

In an EoR model, server cabinets contain patch fields but not access switches. In this model, the total number of servers per cabinet and I/Os per server determines the number of switches used in each Pod, which then drives the physical layer design decisions. The typical EoR Pod contains two Cisco Nexus or Cisco Catalyst switches for redundancy. The length of each row within the Pod is determined by the density of the network switching equipment as well as the distance from the server to the switch. For example, if each server cabinet in the row utilizes 48 connections and the switch has a capacity for 336 connections, the row can support up to seven server cabinets with complete network redundancy, as long as the seven cabinets are within the maximum cable length to the switching equipment.
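The row-sizing example above reduces to a simple ratio of switch port capacity to per-cabinet connections, with cable reach as a separate check. The sketch below is a hedged illustration of that calculation: the 336-port capacity and 48 connections per cabinet come from the text, while the function name and the optional cable-length check are hypothetical.

```python
from typing import Optional

def max_cabinets_per_row(switch_port_capacity: int,
                         connections_per_cabinet: int,
                         cabinet_positions_in_reach: Optional[int] = None) -> int:
    """Cabinets an EoR switch can serve in one row, limited by ports and reach."""
    by_ports = switch_port_capacity // connections_per_cabinet
    if cabinet_positions_in_reach is None:
        return by_ports
    # The port math only holds if every cabinet stays within maximum cable length.
    return min(by_ports, cabinet_positions_in_reach)

# Example from the text: 48 connections per cabinet, 336-port switch capacity.
print(max_cabinets_per_row(336, 48))   # 7 server cabinets per row
```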
Figure 7. Top View of EoR Configuration (diagram: switch cabinets located mid-row between two rows of server cabinets.)

Figure 7 depicts a top view of a typical Pod design for an EoR configuration and shows proper routing of connectivity from a server cabinet to both access switches. Network equipment is located in the middle of the row to distribute redundant connections across two rows of cabinets, supporting a total of 14 server cabinets. The red line represents copper LAN “A” connections and the blue line represents copper LAN “B” connections. For true redundancy the connectivity takes two diverse pathways, using PANDUIT® GRIDRUNNER™ Underfloor Cable Routing Systems to route cables underfloor to each cabinet. For fiber connections there is a similar pathway overhead to distribute all SAN “A” and “B” connections to each cabinet. As applications continue to put even greater demands on the network infrastructure, it is critical to have the appropriate cabling infrastructure in place to support these increased bandwidth and performance requirements.

Each EoR-arranged switch cabinet is optimized to handle the high-density requirements of the Cisco Nexus 7010 switch. Figure 8 depicts the front view of a Cisco Nexus 7010 switch in a NET-ACCESS™ Cabinet populated with Category 6A 10 Gigabit cabling for deployment in an EoR configuration. It is critical to properly manage all connectivity exiting the front of each switch. The EoR-arranged server cabinet is similar to a typical ToR-arranged cabinet (see Figure 5), but the characteristic access layer switch for both LAN and SAN connections is replaced with structured cabling.

Table 2 itemizes the hardware needed to support a typical EoR deployment in a data center with 16,800 square feet of raised floor space. The room topology for the EoR deployment is not drastically different from the ToR model. The row size is determined by the typical connectivity requirements for any given row of server cabinets. Most server cabinets contain a minimum of 24 connections and sometimes exceed 48 connections per cabinet. All EoR reference architectures in this guide are based around 48 copper cables and 24 fiber strands for each server cabinet.

Figure 8. PANDUIT® NET-ACCESS™ Cabinet with Cisco Nexus Switch
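Because the EoR reference architecture fixes the per-cabinet media counts (48 copper cables and 24 fiber strands), pathway fill for a Pod can be estimated directly. The sketch below is an illustrative estimate only, using the 14 server cabinets per Pod shown in Figure 7; the function and parameter names are hypothetical.

```python
def pod_cable_counts(server_cabinets: int = 14,
                     copper_per_cabinet: int = 48,
                     fiber_strands_per_cabinet: int = 24) -> dict:
    """Estimate the copper and fiber counts a single EoR Pod pushes into its pathways."""
    return {
        "copper_cables": server_cabinets * copper_per_cabinet,
        "fiber_strands": server_cabinets * fiber_strands_per_cabinet,
    }

print(pod_cable_counts())   # {'copper_cables': 672, 'fiber_strands': 336}
```

Estimates like this are what ultimately size the underfloor GRIDRUNNER™ routes and the overhead FIBERRUNNER® channels for each Pod.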
Data Center Assumptions                      Quantity
Raised Floor Square Footage                  16,800
Servers                                      9,792
Cisco Nexus 7010                             50
Server Cabinets                              336
Network Cabinets (LAN & SAN Equipment)       108
Midrange & SAN Equipment Cabinets            132

Server Specifications: 2 — 2.66 GHz Intel Quad Core Xeon X5355; 2 x 835 W Hot-Swap; 16 GB RAM — (4) 4096MB DIMM(s); 2 — 146GB 15K-rpm Hot-Swap SAS — 3.5"

                                          Watts per   Devices per   Cabinets   Pods per
                                          Device      Cabinet       per Pod    Room       Quantity   Total Watts
Servers                                   450         12            14         24         9,792      1,814,400
Cisco Appliance Allocation                5,400       —             —          —          8          43,200
Cisco Nexus 7010 (Access)                 5,400       —             2          24         48         259,200
Cisco MDS (Access)                        5,400       —             2          24         48         259,200
Cisco Nexus 7010 (Core)                   5,400       —             —          —          2          10,800
Cisco MDS (Core)                          5,400       —             —          —          2          10,800
Midrange/SAN Equipment Cabinets           4,050       —             —          —          132        502,200
Midrange/SAN Switching MDS                5,400       —             —          —          4          21,600
Midrange/SAN Switching Nexus 7010         5,400       —             —          —          4          21,600

Total Watts: 4,307,400   Total Kilowatts: 4,307.40   Total Megawatts: 4.31

Table 2. Room Requirements for Typical EoR Deployment

Considerations Common to ToR and EoR Configurations

PANDUIT is focused on providing high-density, flexible physical layer solutions that maximize data center space utilization and optimize energy use. The following sections describe cabinet, cooling, and pathway considerations that are common to all logical architectures.

Cabinets

Cabinets must be specified to allow for maximum scalability and flexibility within the data center. The vertically mounted patch panel within the cabinet frees additional rack units that can be used to install more servers within the 45 rack units available. These vertically mounted panels also provide superior cable management versus traditional ToR horizontal patch panels by moving each network connection closer to the server network interface card, ultimately allowing a shorter patching distance with consistent lengths throughout the cabinet. Figure 9 represents a typical vertical patch panel deployment in the NET-ACCESS™ Server Cabinet.

Complete power and data separation is achieved through the use of vertical cable management on both sides of the cabinet frame. The cabinet also allows for both overhead and underfloor cable routing for different data center applications. Shorter power cords are an option to remove the added cable slack from the longer cords that ship with server hardware. Using shorter patch cords and leveraging PANDUIT vertical cable management integrated into the NET-ACCESS™ Cabinet alleviates potential airflow issues at the back of each server that could result from poorly managed cabling.

Figure 9. Cabinet Vertical Mount Patch Panel
Thermal Management

Data center power requirements continue to increase at high rates, making it difficult to plan appropriately for the cooling systems needed to support your room. Cabinets play a critical role in managing the high heat loads generated by active equipment. Each cabinet will require different power loads based upon the type of servers being installed as well as the workload being requested of each compute resource. Understanding cabinet-level power requirements gives greater visibility into overall room conditions. (A rough conversion from room-level electrical load to cooling capacity appears at the end of this section.)

PANDUIT Laboratories' research into thermal management includes advanced computational fluid dynamics (CFD) analysis to model optimal airflow patterns and above-floor temperature distributions throughout the data center. This data is then used to develop rack, cabinet, and cable management systems that efficiently route and organize critical IT infrastructure elements. Figure 10 represents an analysis done based upon the assumptions for the 16,800 square foot EoR deployment.

PANDUIT® NET-ACCESS™ Cabinets feature large pathways for efficient cable routing and improved airflow while providing open-rack accessibility to manage, protect, and showcase cabling and equipment. Elements such as exhaust ducting, filler panels, and the PANDUIT® COOL BOOT™ Raised Floor Assembly support hot and cold aisle separation in accordance with the TIA-942 standard. These passive solutions (no additional fans or compressors) contribute by minimizing bypass air in order to manage higher heat loads in the data center and ensure proper equipment operation and uptime.

Figure 10. Data Center EoR Thermal Analysis (CFD model assumptions: designed in a hot/cold aisle architecture; 12 to 20 ton CRAC units outside and 12 to 30 ton CRAC units inside; ceiling plenum utilized for return air; all perforated tiles at 25% open; peak temperature of 114°.)

Pathways

The variety and density of data center cables mean that there is no “one size fits all” solution when planning cable pathways. Designers usually specify a combination of pathway options. Many types and sizes are available for designers to choose from, including wire basket, ladder rack, J-hooks, conduit, solid metal tray, and fiber-optic cable routing systems. Factors such as room height, equipment cable entry holes, rack and cabinet density, and cable types, counts, and diameters also influence pathway decisions. The pathway strategies developed for the ToR and EoR models all leverage the FIBERRUNNER® Cable Routing System to route horizontal fiber cables, and use the NET-ACCESS™ Overhead Cable Routing System in conjunction with a wire basket or ladder rack for horizontal copper and backbone fiber cables.

This strategy offers several benefits:

      • The combination of overhead fiber routing system and cabinet routing system ensures physical separation between the copper and fiber cables, as recommended in the TIA-942 data center standard

      • Overhead pathways such as the PANDUIT® FIBERRUNNER® Cable Routing System protect fiber optic jumpers, ribbon interconnect cords, and multi-fiber cables in a solid, enclosed channel that provides bend radius control, and the location of the pathway is not disruptive to raised floor cooling

      • The overall visual effect is organized, sturdy, and impressive
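As the rough cross-check promised above, the sketch below relates the electrical loads in Tables 1 and 2 to the CRAC tonnage called out in Figure 10, assuming the standard conversion of roughly 3.517 kW per ton of refrigeration. This is an illustrative back-of-the-envelope calculation, not part of the PANDUIT CFD analysis, and the function name is hypothetical.

```python
KW_PER_TON = 3.517   # 1 ton of refrigeration = 12,000 BTU/hr, roughly 3.517 kW

def cooling_tons(it_load_kw: float) -> float:
    """Rough cooling capacity needed to absorb a given IT electrical load."""
    return it_load_kw / KW_PER_TON

room_load_kw = 4307.40                     # room total from Tables 1 and 2
print(round(cooling_tons(room_load_kw)))   # about 1,225 tons before any safety margin
```

The actual CRAC unit count and sizing still depend on redundancy targets, airflow containment, and the bypass-air behavior modeled in the CFD analysis.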
Conclusion

Next-generation data center hardware such as the Cisco Nexus 7010 switch provides increased network capacity and functionality in the data center, which in turn is placing greater demands on the cabling infrastructure. This guide describes the ways that PANDUIT structured cabling solutions map easily to the logical architectures being deployed in today's high-performance networks to achieve a unified physical layer infrastructure. Network stakeholders can use modular designs for both hardware architectures and cabling layouts to ensure that the system will scale over the life of the data center, survive multiple equipment refreshes, and meet aggressive uptime goals.

About PANDUIT

PANDUIT is a leading, world-class developer and provider of innovative networking and electrical solutions. For more than 50 years, PANDUIT has engineered and manufactured end-to-end solutions that assist our customers in the deployment of the latest technologies. Our global expertise and strong industry relationships make PANDUIT a valuable and trusted partner dedicated to delivering technology-driven solutions and unmatched service. Through our commitment to innovation, quality, and service, PANDUIT creates competitive advantages to earn customer preference.

Cisco, Cisco Systems, Nexus, Catalyst, and MDS are registered trademarks of Cisco Technology, Inc.

www.panduit.com • cs@panduit.com • 800-777-3300

©2008 PANDUIT Corp. ALL RIGHTS RESERVED. Printed in U.S.A. WW-CPCB36 8/2008