THE NEW
DATA
CENTER
 FIRST EDITION
 New technologies are radically
 reshaping the data center




 TOM CLARK
Tom Clark, 1947–2010
All too infrequently we have the true privilege of knowing a friend
and colleague like Tom Clark. We mourn the passing of a special
person, a man who was inspired as well as inspiring, an intelligent
and articulate man, a sincere and gentle person with enjoyable
humor, and someone who was respected for his great achievements.
We will always remember the endearing and rewarding experiences
with Tom and he will be greatly missed by those who knew him.
Mark S. Detrick
© 2010 Brocade Communications Systems, Inc. All Rights Reserved.
Brocade, the B-wing symbol, BigIron, DCFM, DCX, Fabric OS, FastIron, IronView,
NetIron, SAN Health, ServerIron, TurboIron, and Wingspan are registered
trademarks, and Brocade Assurance, Brocade NET Health, Brocade One,
Extraordinary Networks, MyBrocade, and VCS are trademarks of Brocade
Communications Systems, Inc., in the United States and/or in other countries.
Other brands, products, or service names mentioned are or may be
trademarks or service marks of their respective owners.
Notice: This document is for informational purposes only and does not set
forth any warranty, expressed or implied, concerning any equipment,
equipment feature, or service offered or to be offered by Brocade. Brocade
reserves the right to make changes to this document at any time, without
notice, and assumes no responsibility for its use. This informational document
describes features that may not be currently available. Contact a Brocade
sales office for information on feature and product availability. Export of
technical data contained in this document may require an export license from
the United States government.
Brocade Bookshelf Series designed by Josh Judd
The New Data Center
Written by Tom Clark
Reviewed by Brook Reams
Edited by Victoria Thomas
Design and Production by Victoria Thomas
Illustrated by Jim Heuser, David Lehmann, and Victoria Thomas

Printing History
First Edition, August 2010




Important Notice
Use of this book constitutes consent to the following conditions. This book is
supplied “AS IS” for informational purposes only, without warranty of any kind,
expressed or implied, concerning any equipment, equipment feature, or
service offered or to be offered by Brocade. Brocade reserves the right to
make changes to this book at any time, without notice, and assumes no
responsibility for its use. This informational document describes features that
may not be currently available. Contact a Brocade sales office for information
on feature and product availability. Export of technical data contained in this
book may require an export license from the United States government.
Brocade Corporate Headquarters
San Jose, CA USA
T: +01-408-333-8000
info@brocade.com
Brocade European Headquarters
Geneva, Switzerland
T: +41-22-799-56-40
emea-info@brocade.com
Brocade Asia Pacific Headquarters
Singapore
T: +65-6538-4700
apac-info@brocade.com

Acknowledgements
I would first of all like to thank Ron Totah, Senior Director of Marketing at
Brocade and cat-herder of the Global Solutions Architects, a.k.a. Solutioneers.
Ron's consistent support and encouragement for the Brocade Bookshelf
projects and Brocade TechBytes Webcast series provides sustained
momentum for getting technical information into the hands of our customers.
The real work of project management, copyediting, content generation,
assembly, publication, and promotion is done by Victoria Thomas, Technical
Marketing Manager at Brocade. Without Victoria's steadfast commitment,
none of this material would see the light of day.
I would also like to thank Brook Reams, Solution Architect for Applications
on the Integrated Marketing team, for reviewing my draft manuscript and
providing suggestions and invaluable insights on the technologies under
discussion.
Finally, a thank you to the entire Brocade team for making this a first-class
company that produces first-class products for first-class customers
worldwide.




About the Author
Tom Clark was a resident SAN evangelist for Brocade and represented
Brocade in industry associations, conducted seminars and tutorials at
conferences and trade shows, promoted Brocade storage networking
solutions, and acted as a customer liaison. A noted author and industry
advocate of storage networking technology, he was a board member of the
Storage Networking Industry Association (SNIA) and former Chair of the SNIA
Green Storage Initiative. Clark has published hundreds of articles and white
papers on storage networking and is the author of Designing Storage Area
Networks, Second Edition (Addison-Wesley 2003), IP SANs: A Guide to iSCSI,
iFCP and FCIP Protocols for Storage Area Networks (Addison-Wesley 2001),
Storage Virtualization: Technologies for Simplifying Data Storage and
Management (Addison-Wesley 2005), and Strategies for Data Protection
(Brocade Bookshelf, 2008).
Prior to joining Brocade, Clark was Director of Solutions and Technologies
for McDATA Corporation and the Director of Technical Marketing for Nishan
Systems, the innovator of storage over IP technology. As a liaison between
marketing, engineering, and customers, he focused on customer education
and defining features that ensure productive deployment of SANs. With more
than 20 years of experience in the IT industry, Clark held technical marketing and
systems consulting positions with storage networking and other data
communications companies.
Sadly, Tom Clark passed away in February 2010. Anyone who knew Tom knows
that he was intelligent, quick, a voice of sanity and also sarcasm, and a
pragmatist with a great heart. He was indeed the heart of Brocade TechBytes,
a monthly Webcast he described as “a late night technical talk show,” which
was launched in November 2008 and is still part of Brocade’s Technical
Marketing program.




Contents


Preface ....................................................................................................... xv
Chapter 1: Supply and Demand ..............................................................1
Chapter 2: Running Hot and Cold ...........................................................9
Energy, Power, and Heat ...................................................................................... 9
Environmental Parameters ................................................................................10
Rationalizing IT Equipment Distribution ............................................................11
Economizers ........................................................................................................14
Monitoring the Data Center Environment .........................................................15
Chapter 3: Doing More with Less ......................................................... 17
VMs Reborn ......................................................................................................... 17
Blade Server Architecture ..................................................................................21
Brocade Server Virtualization Solutions ...........................................................22
    Brocade High-Performance 8 Gbps HBAs .................................................23
    Brocade 8 Gbps Switch and Director Ports ..............................................24
    Brocade Virtual Machine SAN Boot ...........................................................24
    Brocade N_Port ID Virtualization for Workload Optimization ..................25
    Configuring Single Initiator/Target Zoning ................................................26
    Brocade End-to-End Quality of Service ......................................................26
    Brocade LAN and SAN Security .................................................................27
    Brocade Access Gateway for Blade Frames ..............................................28
    The Energy-Efficient Brocade DCX Backbone Platform for
    Consolidation ..............................................................................................28
    Enhanced and Secure Client Access with Brocade LAN Solutions .........29
    Brocade Industry Standard SMI-S Monitoring ..........................................29
    Brocade Professional Services ..................................................................30
FCoE and Server Virtualization ..........................................................................31
Chapter 4: Into the Pool ........................................................................ 35
Optimizing Storage Capacity Utilization in the Data Center .............................35
Building on a Storage Virtualization Foundation ..............................................39
Centralizing Storage Virtualization from the Fabric .......................................... 41
Brocade Fabric-based Storage Virtualization ...................................................43





Chapter 5: Weaving a New Data Center Fabric ................................. 45
Better Fewer but Better ......................................................................................46
Intelligent by Design ...........................................................................................48
Energy Efficient Fabrics ......................................................................................53
Safeguarding Storage Data ................................................................................55
Multi-protocol Data Center Fabrics ....................................................................58
Fabric-based Disaster Recovery ........................................................................64
Chapter 6: The New Data Center LAN ................................................. 69
A Layered Architecture ....................................................................................... 71
Consolidating Network Tiers .............................................................................. 74
Design Considerations .......................................................................................75
     Consolidate to Accommodate Growth .......................................................75
     Network Resiliency .....................................................................................76
     Network Security .........................................................................................77
     Power, Space and Cooling Efficiency .........................................................78
     Network Virtualization ................................................................................79
Application Delivery Infrastructure ....................................................................80
Chapter 7: Orchestration ....................................................................... 83
Chapter 8: Brocade Solutions Optimized for Server Virtualization . 89
Server Adapters ..................................................................................................89
    Brocade 825/815 FC HBA .........................................................................90
    Brocade 425/415 FC HBA .........................................................................91
    Brocade FCoE CNAs ....................................................................................91
Brocade 8000 Switch and FCOE10-24 Blade ..................................................92
Access Gateway ..................................................................................................93
Brocade Management Pack ..............................................................................94
Brocade ServerIron ADX .....................................................................................95
Chapter 9: Brocade SAN Solutions ...................................................... 97
Brocade DCX Backbones (Core) ........................................................................98
Brocade 8 Gbps SAN Switches (Edge) ........................................................... 100
    Brocade 5300 Switch ...............................................................................101
    Brocade 5100 Switch .............................................................................. 102
    Brocade 300 Switch ................................................................................ 103
    Brocade VA-40FC Switch ......................................................................... 104
Brocade Encryption Switch and FS8-18 Encryption Blade ........................... 105
Brocade 7800 Extension Switch and FX8-24 Extension Blade .................... 106
Brocade Optical Transceiver Modules .............................................................107
Brocade Data Center Fabric Manager ............................................................ 108
Chapter 10: Brocade LAN Network Solutions ..................................109
Core and Aggregation ...................................................................................... 110
    Brocade NetIron MLX Series ................................................................... 110
    Brocade BigIron RX Series ...................................................................... 111






Access .............................................................................................................. 112
    Brocade TurboIron 24X Switch ................................................................ 112
    Brocade FastIron CX Series ..................................................................... 113
    Brocade NetIron CES 2000 Series ......................................................... 113
    Brocade FastIron Edge X Series ............................................................. 114
Brocade IronView Network Manager .............................................................. 115
Brocade Mobility .............................................................................................. 116
Chapter 11: Brocade One ....................................................................117
Evolution not Revolution ..................................................................................117
Industry's First Converged Data Center Fabric .............................................. 119
     Ethernet Fabric ........................................................................................ 120
     Distributed Intelligence ........................................................................... 120
     Logical Chassis ........................................................................................ 121
     Dynamic Services .................................................................................... 121
The VCS Architecture ....................................................................................... 122
Appendix A: “Best Practices for Energy Efficient Storage
Operations” .............................................................................................123
Introduction ...................................................................................................... 123
Some Fundamental Considerations ............................................................... 124
Shades of Green .............................................................................................. 125
     Best Practice #1: Manage Your Data ..................................................... 126
     Best Practice #2: Select the Appropriate Storage RAID Level .............. 128
     Best Practice #3: Leverage Storage Virtualization ................................ 129
     Best Practice #4: Use Data Compression .............................................. 130
     Best Practice #5: Incorporate Data Deduplication ................................131
     Best Practice #6: File Deduplication .......................................................131
     Best Practice #7: Thin Provisioning of Storage to Servers .................... 132
     Best Practice #8: Leverage Resizeable Volumes .................................. 132
     Best Practice #9: Writeable Snapshots ................................................. 132
     Best Practice #10: Deploy Tiered Storage ............................................. 133
     Best Practice #11: Solid State Storage .................................................. 133
     Best Practice #12: MAID and Slow-Spin Disk Technology .................... 133
     Best Practice #13: Tape Subsystems ..................................................... 134
     Best Practice #14: Fabric Design ........................................................... 134
     Best Practice #15: File System Virtualization ....................... 134
     Best Practice #16: Server, Fabric and Storage Virtualization .............. 135
     Best Practice #17: Flywheel UPS Technology ........................................ 135
     Best Practice #18: Data Center Air Conditioning Improvements ......... 136
     Best Practice #19: Increased Data Center temperatures .................... 136
     Best Practice #20: Work with Your Regional Utilities .............................137
What the SNIA is Doing About Data Center Energy Usage .............................137
About the SNIA ................................................................................................. 138
Appendix B: Online Sources .................................................................139
Glossary ..................................................................................................141
Index ........................................................................................................153

Figures


Figure 1. The ANSI/TIA-942 standard functional area connectivity. ................ 3
Figure 2. The support infrastructure adds substantial cost and energy over-
head to the data center. ...................................................................................... 4
Figure 3. Hot aisle/cold aisle equipment floor plan. .......................................11
Figure 4. Variable speed fans enable more efficient distribution of cooling. 12
Figure 5. The concept of work cell incorporates both equipment power draw
and requisite cooling. .........................................................................................13
Figure 6. An economizer uses the lower ambient temperature of outside air to
provide cooling. ...................................................................................................14
Figure 7. A native or Type 1 hypervisor. ...........................................................18
Figure 8. A hosted or Type 2 hypervisor. ..........................................................19
Figure 9. A blade server architecture centralizes shared resources while reduc-
ing individual blade server elements. ...............................................................21
Figure 10. The Brocade 825 8 Gbps HBA supports N_Port Trunking for an ag-
gregate 16 Gbps bandwidth and 1000 IOPS. ..................................................23
Figure 11. SAN boot centralizes management of boot images and facilitates
migration of virtual machines between hosts. .................................................25
Figure 12. Brocade's QoS enforces traffic prioritization from the server HBA to
the storage port across the fabric. ....................................................................26
Figure 13. Brocade SecureIron switches provide firewall traffic management
and LAN security for client access to virtual server clusters. ..........................27
Figure 14. The Brocade Encryption Switch provides high-performance data en-
cryption to safeguard data written to disk or tape. ..........................................27
Figure 15. Brocade BigIron RX platforms offer high-performance Layer 2/3
switching in three compact, energy-efficient form factors. .............................29
Figure 16. FCoE simplifies the server cable plant by reducing the number of
network interfaces required for client, peer-to-peer, and storage access. ....31
Figure 17. An FCoE top-of-rack solution provides both DCB and Fibre Channel
ports and provides protocol conversion to the data center SAN. ...................32





Figure 18. Brocade 1010 and 1020 CNAs and the Brocade 8000 Switch facil-
itate a compact, high-performance FCoE deployment. ....................................33
Figure 19. Conventional storage configurations often result in over- and under-
utilization of storage capacity across multiple storage arrays. .......................36
Figure 20. Storage virtualization aggregates the total storage capacity of mul-
tiple physical arrays into a single virtual pool. ..................................................37
Figure 21. The virtualization abstraction layer provides virtual targets to real
hosts and virtual hosts to real targets. .............................................................38
Figure 22. Leveraging classes of storage to align data storage to the business
value of data over time. .....................................................................................40
Figure 23. FAIS splits the control and data paths for more efficient execution
of metadata mapping between virtual storage and servers. ..........................42
Figure 24. The Brocade FA4-18 Application Blade provides line-speed metada-
ta map execution for non-disruptive storage pooling, mirroring and data migra-
tion. ......................................................................................................................43
Figure 25. A storage-centric core/edge topology provides flexibility in deploying
servers and storage assets while accommodating growth over time. ............47
Figure 26. Brocade QoS gives preferential treatment to high-value applications
through the fabric to ensure reliable delivery. ..................................................49
Figure 27. Ingress rate limiting enables the fabric to alleviate potential conges-
tion by throttling the transmission rate of the offending initiator. ..................50
Figure 28. Preferred paths are established through traffic isolation zones,
which enforce separation of traffic through the fabric based on designated
applications. ........................................................................................................51
Figure 29. By monitoring traffic activity on each port, Top Talkers can identify
which applications would most benefit from Adaptive Networking services. 52
Figure 30. Brocade DCX power consumption at full speed on an 8 Gbps port
compared to the competition. ...........................................................................54
Figure 31. The Brocade Encryption Switch provides secure encryption for disk
or tape. ................................................................................................................56
Figure 32. Using fabric ACLs to secure switch and device connectivity. .......58
Figure 33. Integrating formerly standalone mid-tier servers into the data center
fabric with an iSCSI blade in the Brocade DCX. ...............................................61
Figure 34. Using Virtual Fabrics to isolate applications and minimize fabric-
wide disruptions. ................................................................................................62
Figure 35. IR facilitates resource sharing between physically independent
SANs. ...................................................................................................................64
Figure 36. Long-distance connectivity options using Brocade devices. ........67
Figure 37. Access, aggregation, and core layers in the data center
network. ...............................................................................................................71
Figure 38. Access layer switch placement is determined by availability, port
density, and cable strategy. ...............................................................................73



Figure 39. A Brocade BigIron RX Series switch consolidates connectivity in a
more energy efficient footprint. .........................................................................75
Figure 40. Network infrastructure typically contributes only 10% to 15% of total
data center IT equipment power usage. ...........................................................79
Figure 41. Application congestion (traffic shown as a dashed line) on a Web-
based enterprise application infrastructure. ....................................................80
Figure 42. Application workload balancing, protocol processing offload and se-
curity via the Brocade ServerIron ADX. .............................................................81
Figure 43. Open systems-based orchestration between virtualization
domains. ..............................................................................................................84
Figure 44. Brocade Management Pack for Microsoft Service Center Virtual
Machine Manager leverages APIs between the SAN and SCVMM to trigger VM
migration. ............................................................................................................86
Figure 45. Brocade 825 FC 8 Gbps HBA (dual ports shown). ........................90
Figure 46. Brocade 415 FC 4 Gbps HBA (single port shown). .......................91
Figure 47. Brocade 1020 (dual ports) 10 Gbps Fibre Channel over Ethernet-to-
PCIe CNA. ............................................................................................................92
Figure 48. Brocade 8000 Switch. ....................................................................92
Figure 49. Brocade FCOE10-24 Blade. ............................................................93
Figure 50. SAN Call Home events displayed in the Microsoft System Center
Operations Center interface. .............................................................................94
Figure 51. Brocade ServerIron ADX 1000. ......................................................95
Figure 52. Brocade DCX (left) and DCX-4S (right) Backbone. ........................98
Figure 53. Brocade 5300 Switch. ................................................................. 101
Figure 54. Brocade 5100 Switch. ................................................................. 102
Figure 55. Brocade 300 Switch. .................................................................... 103
Figure 56. Brocade VA-40FC Switch. ............................................................ 104
Figure 57. Brocade Encryption Switch. ......................................................... 105
Figure 58. Brocade FS8-18 Encryption Blade. ............................................. 105
Figure 59. Brocade 7800 Extension Switch. ................................................ 106
Figure 60. Brocade FX8-24 Extension Blade. ............................................... 107
Figure 61. Brocade DCFM main window showing the topology view. ......... 108
Figure 62. Brocade NetIron MLX-4. ............................................................... 110
Figure 63. Brocade BigIron RX-16. ................................................................ 111
Figure 64. Brocade TurboIron 24X Switch. ................................................... 112
Figure 65. Brocade FastIron CX-624S-HPOE Switch. ................................... 113
Figure 66. Brocade NetIron CES 2000 switches, 24- and 48-port configura-
tions in both Hybrid Fiber (HF) and RJ45 versions. ....................................... 114
Figure 67. Brocade FastIron Edge X 624. ..................................................... 114




Figure 68. Brocade INM Dashboard (top) and Backup Configuration Manager
(bottom). ........................................................................................................... 115
Figure 69. The pillars of Brocade VCS (detailed in the next section). ......... 118
Figure 70. A Brocade VCS reference network architecture. ........................ 122




Preface


Data center administrators today are facing unprecedented chal-
lenges. Business applications are shifting from conventional client/
server relationships to Web-based applications, data center real
estate is at a premium, energy costs continue to escalate, new regula-
tions are imposing more rigorous requirements for data protection and
security, and tighter corporate budgets are making it difficult to
accommodate client demands for more applications and data storage.
Since all major enterprises run their businesses on the basis of digital
information, the consequences of inadequate processing power, stor-
age, network accessibility, or data availability can have a profound
impact on the viability of the enterprise itself.
At the same time, new technologies that promise to alleviate some of
these issues require both capital expenditures and a sharp learning
curve to successfully integrate new solutions that can increase produc-
tivity and lower ongoing operational costs. The ability to quickly adapt
new technologies to new problems is essential for creating a more flex-
ible data center strategy that can meet both current and future
requirements. This effort necessitates cooperation between both data
center administrators and vendors and between the multiple vendors
responsible for providing the elements that compose a comprehensive
data center solution.
The much overused term “ecosystem” is nonetheless an accurate
description of the interdependencies of technologies required for
twenty-first century data center operation. No single vendor manufac-
tures the full spectrum of hardware and software elements required to
drive data center IT processing. This is especially true when the three
major domains of IT operations (server, storage, and networking) are each
undergoing profound technical evolution in the form of virtualization.
Not only must products be designed and tested for
standards compliance and multi-vendor operability, but management
between the domains must be orchestrated to ensure stable opera-
tions and coordination of tasks.
Brocade has a long and proven track record in data center network
innovation and collaboration with partners to create new solutions that
solve real problems while at the same time reducing deployment and
operational costs. This book provides an overview of the new technolo-
gies that are radically transforming the data center into a more cost-
effective corporate asset and the specific Brocade products that can
help you achieve this goal.
The book is organized as follows:
•     “Chapter 1: Supply and Demand” starting on page 1 examines the
      technological and business drivers that are forcing changes in the
      conventional data center paradigm. Due to increased business
      demands (even in difficult economic times), data centers are run-
      ning out of space and power and this in turn is driving new
      initiatives for server, storage and network consolidation.
•     “Chapter 2: Running Hot and Cold” starting on page 9 looks at
      data center power and cooling issues that threaten productivity
      and operational budgets. New technologies such as wet and dry-
      side economizers, hot aisle/cold aisle rack deployment, and
      proper sizing of the cooling plant can help maximize productive
      use of existing real estate and reduce energy overhead.
•     “Chapter 3: Doing More with Less” starting on page 17 provides
      an overview of server virtualization and blade server technology.
      Server virtualization, in particular, is moving from secondary to pri-
      mary applications and requires coordination with upstream
      networking and downstream storage for successful implementa-
      tion. Brocade has developed a suite of new technologies to
      leverage the benefits of server virtualization and coordinate oper-
      ation between virtual machine managers and the LAN and SAN
      networks.
•     “Chapter 4: Into the Pool” starting on page 35 reviews the poten-
      tial benefits of storage virtualization for maximizing utilization of
      storage assets and automating life cycle management.




•   “Chapter 5: Weaving a New Data Center Fabric” starting on
    page 45 examines the recent developments in storage networking
    technology, including higher bandwidth, fabric virtualization,
    enhanced security, and SAN extension. Brocade continues to pio-
    neer more productive solutions for SANs and is the author or co-
    author of the significant standards underlying these new
    technologies.
•   “Chapter 6: The New Data Center LAN” starting on page 69 high-
    lights the new challenges that virtualization and Web-based
    applications present to the data communications network. Prod-
    ucts like the Brocade ServerIron ADX Series of application delivery
    controller provide more intelligence in the network to offload
    server protocol processing and provide much higher levels of avail-
    ability and security.
•   “Chapter 7: Orchestration” starting on page 83 focuses on the
    importance of standards-based coordination between server, stor-
    age and network domains so that management frameworks can
    provide a comprehensive view of the entire infrastructure and pro-
    actively address potential bottlenecks.
•   Chapters 8, 9, and 10 provide brief descriptions of Brocade prod-
    ucts and technologies that have been developed to solve data
    center problems.
•     “Chapter 11: Brocade One” starting on page 117 describes a new
    Brocade direction and innovative technologies to simplify the com-
    plexity of virtualized data centers.
•   “Appendix A: “Best Practices for Energy Efficient Storage Opera-
    tions”” starting on page 123 is a reprint of an article written by
    Tom Clark and Dr. Alan Yoder, NetApp, for the SNIA Green Storage
    Initiative (GSI).
•   “Appendix B: Online Sources” starting on page 139 is a list of
    online resources.
•   The “Glossary” starting on page 141 is a list of data center net-
    work terms and definitions.




Supply and Demand
                                                              1
The collapse of the old data center paradigm



As in other social and economic sectors, information technology has
recently found itself in the awkward position of having lived beyond its
means. The seemingly endless supply of affordable real estate, elec-
tricity, data processing equipment, and technical personnel enabled
companies to build large data centers to house their mainframe and
open systems infrastructures and to support the diversity of business
applications typical of modern enterprises. In the new millennium,
however, real estate has become prohibitively expensive, the cost of
energy has skyrocketed, utilities are often incapable of increasing sup-
ply to existing facilities, data processing technology has become more
complex, and the pool of technical talent to support new technologies
is shrinking.
At the same time, the increasing dependence of companies and insti-
tutions on electronic information and communications has resulted in
a geometric increase in the amount of data that must be managed
and stored. Since 2000, the amount of corporate data generated
worldwide has grown from 5 exabytes (5 billion gigabytes) to over 300
exabytes, with projections of about 1 zettabyte (1000 exabytes) by
2010. This data must be stored somewhere. The installation of more
servers and disk arrays to accommodate data growth is simply not sus-
tainable as data centers run out of floor space, cooling capacity, and
energy to feed additional hardware. The demands constantly placed
on IT administrators to expand support for new applications and data
are now in direct conflict with the supply of data center space and
power.
Gartner predicted that by 2009, half of the world's data centers would
not have sufficient power to support their applications. An Emerson
Power survey projects that 96% of all data centers will not have suffi-
cient power by 2011.



The conventional approach to data center design and operations has
endured beyond its usefulness primarily due to a departmental silo
effect common to many business operations. A data center adminis-
trator, for example, could specify the near-term requirements for power
distribution for IT equipment but because the utility bill was often paid
for by the company's facilities management, the administrator would
be unaware of continually increasing utility costs. Likewise, individual
business units might deploy new rich content applications resulting in
a sudden spike in storage requirements and additional load placed on
the messaging network, with no proactive notification of the data cen-
ter and network operators.
In addition, the technical evolution of data center design, cooling tech-
nology, and power distribution has lagged far behind the rapid
development of server platforms, networks, storage technology, and
applications. Twenty-first century technology now resides in twentieth
century facilities that are proving too inflexible to meet the needs of
the new data processing paradigm. Consequently, many IT managers
are looking for ways to align the data center infrastructure to the new
realities of space, power, and budget constraints.
Although data centers have existed for over 50 years, guidelines for
data center design were not codified into standards until 2005. The
ANSI/TIA-942 Telecommunications Infrastructure Standard for Data
Centers focuses primarily on cable plant design but also includes
power distribution, cooling, and facilities layout. TIA-942 defines four
basic tiers for data center classification, characterized chiefly by the
degree of availability each provides:
•   Tier 1. Basic data center with no redundancy
•   Tier 2. Redundant components but single distribution path
•   Tier 3. Concurrently maintainable with multiple distribution paths
    and one active
•   Tier 4. Fault tolerant with multiple active distribution paths
 A Tier 4 data center is obviously the most expensive to build and main-
tain but fault tolerance is now essential for most data center
implementations. Loss of data access is loss of business and few com-
panies can afford to risk unplanned outages that disrupt customers
and revenue streams. A “five-nines” (99.999%) availability that allows
for only 5.26 minutes of data center downtime annually requires
redundant electrical, UPS, mechanical, and generator systems. Duplication
of power and cooling sources, cabling, network ports, and storage,
however, doubles both the cost of the data center infrastructure and the
recurring monthly cost of energy. Without new means to
reduce the amount of space, cooling, and power while maintaining
high data availability, the classic data center architecture is not
sustainable.
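The downtime arithmetic behind these availability figures is easy to
verify. The short sketch below is illustrative only (it is not part of
the original text) and computes the annual downtime budget implied by
several availability targets:

    # Annual downtime budget implied by an availability target.
    # Illustrative sketch; the five-nines result matches the 5.26
    # minutes cited above.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

    def allowed_downtime_minutes(availability: float) -> float:
        """Annual downtime allowance in minutes for an availability ratio."""
        return (1.0 - availability) * MINUTES_PER_YEAR

    for target in (0.999, 0.9999, 0.99999):
        print(f"{target * 100:.3f}% availability -> "
              f"{allowed_downtime_minutes(target):.2f} minutes of downtime per year")
    # 99.999% availability -> 5.26 minutes of downtime per year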

[Figure 1 diagram: the Entrance Room (carrier equipment and demarcations)
and the Offices/Operations Center, with its Telecom Room and office LAN
switches, connect over backbone cabling to the Computer Room. Within the
Computer Room, the Main Distribution Area (routers, backbone LAN/SAN/KVM
switches, PBX, M13 muxes) feeds Horizontal and Zone Distribution Areas
(LAN/SAN/KVM switches), which connect through horizontal cabling to
Equipment Distribution Areas (racks and cabinets).]




Figure 1. The ANSI/TIA-942 standard functional area connectivity.

As shown in Figure 1, the TIA-942 standard defines the main func-
tional areas and interconnecting cable plant for the data center.
Horizontal distribution is typically subfloor for older raised-floor data
centers or ceiling rack drop for newer facilities. The definition of pri-
mary functional areas is meant to rationalize the cable plant and
equipment placement so that space is used more efficiently and ongo-
ing maintenance and troubleshooting can be minimized. As part of the
mainframe legacy, many older data centers are victims of indiscriminate
cable runs, often strung reactively in response to an immediate
need. The subfloors of older data centers can be clogged with aban-
doned bus and tag cables, which are simply too long and too tangled
to remove. This impedes airflow and makes it difficult to accommo-
date new cable requirements.
Note that the overview in Figure 1 does not depict the additional data
center infrastructure required for UPS systems (primarily battery
rooms), cooling plant, humidifiers, backup generators, fire suppres-
sion equipment, and other facilities support systems. Although the
support infrastructure represents a significant part of the data center
investment, it is often over-provisioned for the actual operational
power and cooling requirements of IT equipment. Even though it may
be done in anticipation of future growth, over-provisioning is now a lux-
ury that few data centers can afford. Properly sizing the computer
room air conditioning (CRAC) to the proven cooling requirement is one
of the first steps in getting data center power costs under control.

[Figure 2 diagram: the same functional areas as Figure 1, supplemented by
the facilities support systems: UPS and battery room, backup generators
with diesel fuel reserves, power distribution, fire suppression system,
computer room air conditioners (CRAC) and their conduits, and cooling
towers.]



Figure 2. The support infrastructure adds substantial cost and energy
overhead to the data center.

The diagram in Figure 2 shows the basic functional areas for IT pro-
cessing supplemented by the key data center support systems
required for high availability data access. Each unit of powered equip-
ment has a multiplier effect on total energy draw. First, each data
center element consumes electricity according to its specific load
requirements, typically on a 7x24 basis. Second, each unit dissipates
heat as a natural by-product of its operation, and heat removal and
cooling requires additional energy draw in the form of the computer
room air conditioning system. The CRAC system itself generates heat,
which also requires cooling. Depending on the design, the CRAC sys-
tem may require auxiliary equipment such as cooling towers, pumps,
and so on, which draw additional power. Because electronic equip-
ment is sensitive to ambient humidity, each element also places an
additional load on the humidity control system. And finally, each element
requires UPS support for continuous operation in the event of a
power failure. Even in standby mode, the UPS draws power for monitor-
ing controls, charging batteries, and fly-wheel operation.
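This multiplier effect can be sketched with a rough model; the overhead
fractions below are hypothetical placeholders rather than measured
values, but they show how each watt of IT load drags supporting load
along with it:

    # Estimate total facility draw from the IT load plus its supporting
    # systems. The overhead fractions are hypothetical, for illustration only.
    it_load_kw = 500.0  # metered IT equipment load

    overhead_fractions = {
        "cooling (CRAC, towers, pumps)": 0.60,
        "humidity control": 0.05,
        "UPS conversion losses and battery charging": 0.10,
        "power distribution losses": 0.05,
    }

    total_kw = it_load_kw * (1 + sum(overhead_fractions.values()))
    print(f"IT load: {it_load_kw:.0f} kW, estimated facility draw: {total_kw:.0f} kW "
          f"({total_kw / it_load_kw:.2f}x multiplier)")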
Air conditioning and air flow systems typically represent about 37% of
a data center's power bill. Although these systems are essential for IT
operations, they are often over-provisioned in older data centers and
the original air flow strategy may not work efficiently for rack-mount
open systems infrastructure. For an operational data center, however,
retrofitting or redesigning air conditioning and flow during production
may not be feasible.
For large data centers in particular, the steady accumulation of more
servers, network infrastructure, and storage elements and their
accompanying impact on space, cooling, and energy capabilities highlight
the shortcomings of conventional data center design. Additional space
simply may not be available, the air flow may be inadequate for sufficient
cooling, and utility-supplied power may already be at its maximum. And
yet the escalating requirements for more applications, more data stor-
age, faster performance, and higher availability continue unabated.
Resolving this contradiction between supply and demand requires
much closer attention to both the IT infrastructure and the data center
architecture as elements of a common ecosystem.

As long as energy was relatively inexpensive, companies tended to
simply buy additional floor space and cooling to deal with increasing IT
processing demands. Little attention was paid to the efficiency of elec-
trical distribution systems or the IT equipment they serviced. With
energy now at a premium, maximizing utilization of available power by
increasing energy efficiency is essential.
Industry organizations have developed new metrics for calculating the
energy efficiency of data centers and providing guidance for data cen-
ter design and operations. The Uptime Institute, for example, has
formulated a Site Infrastructure Energy Efficiency Ratio (SI-EER) to
analyze the relationship between total power supplied to the data cen-
ter and the power that is supplied specifically to operate IT equipment.
The total facilities power input divided by the IT equipment power draw
highlights the energy losses due to power conversion, heating/cooling,
inefficient hardware, and other contributors. A SI-EER of 2 would indi-
cate that for every 2 watts of energy input at the data center meter,
only 1 watt drives IT equipment. By the Uptime Institute's own mem-
ber surveys, a SI-EER of 2.5 is not uncommon.





Likewise, The Green Grid, a global consortium of IT companies and
professionals seeking to improve energy efficiency in data centers and
business computing ecosystems, has proposed a Data Center Infra-
structure Efficiency (DCiE) ratio that divides the IT equipment power
draw by the total data center facility power. This is essentially the recip-
rocal of SI-EER, yielding a fractional ratio between the facilities power
supplied and the actual power draw for IT processing. With DCiE or SI-
EER, however, it is not possible to achieve a 1:1 ratio that would
enable every watt supplied to the data center to be productively used
for IT processing. Cooling, air flow, humidity control, fire suppression,
power distribution losses, backup power, lighting, and other factors
inevitably consume power. These supporting elements, however, can
be managed so that productive utilization of facilities power is
increased and IT processing itself is made more efficient via new tech-
nologies and better product design.
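As a concrete illustration of the two metrics, the sketch below computes
both ratios from the same pair of measurements; the facility and IT
power figures are hypothetical, chosen to give an SI-EER of 2:

    # SI-EER = total facility power / IT equipment power (Uptime Institute)
    # DCiE   = IT equipment power / total facility power (The Green Grid)
    def si_eer(facility_kw: float, it_kw: float) -> float:
        return facility_kw / it_kw

    def dcie(facility_kw: float, it_kw: float) -> float:
        return it_kw / facility_kw

    facility_kw = 1000.0  # power measured at the data center meter
    it_kw = 500.0         # power delivered to IT equipment

    print(f"SI-EER: {si_eer(facility_kw, it_kw):.2f}")  # 2.00: 2 watts in per watt of IT load
    print(f"DCiE:   {dcie(facility_kw, it_kw):.0%}")    # 50%: half of facility power reaches IT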
Although SI-EER and DCiE are useful tools for a top-down analysis of
data center efficiency, it is difficult to support these high-level metrics
with real substantiating data. It is not sufficient, for example, to simply
use the manufacturer's stated power figures for specific equipment,
especially since manufacturer power ratings are often based on pro-
jected peak usage and not normal operations. In addition, stated
ratings cannot account for hidden inefficiencies (for example, failure to
use blanking panels in 19" racks) that periodically increase the overall
power draw depending on ambient conditions. The alternative is to
meter major data center components to establish baselines of opera-
tional power consumption. Although it may be feasible to design in
metering for a new data center deployment, it is more difficult for exist-
ing environments. The ideal solution is for facilities and IT equipment
to have embedded power metering capability that can be solicited via
network management frameworks.




High-level SI-EER and DCiE metrics focus on data center energy effi-
ciency to power IT equipment. Unfortunately, this does not provide
information on the energy efficiency or productivity of the IT equipment
itself. Suppose that there were two data centers with equivalent IT
productivity: the one drawing 50 megawatts of power to drive 25
megawatts of IT equipment would have the same DCiE as a data cen-
ter drawing 10 megawatts to drive 5 megawatts of IT equipment. The
IT equipment energy efficiency delta could be due to a number of dif-
ferent technology choices, including server virtualization, more
efficient power supplies and hardware design, data deduplication,
tiered storage, storage virtualization, or other elements. The practical
usefulness of high-level metrics is therefore dependent on underlying
opportunities to increase energy efficiency in individual products and
IT systems. Having a tighter ratio between facilities power input and IT
output is good, but lowering the overall input number is much better.
Data center energy efficiency has external implications as well. Cur-
rently, data centers in the US alone require the equivalent of more
than six 1,000-megawatt power plants at a cost of approximately $3B
annually. Although that represents less than 2% of US power consump-
tion, it is still a significant and growing number. Global data center
power usage is more than twice the US figure. Given that all modern
commerce and information exchange is based ultimately on digitized
data, the social cost in terms of energy consumption for IT processing
is relatively modest. In addition, the spread of digital information and
commerce has already provided environmentally friendly benefits in
terms of electronic transactions for banking and finance, e-commerce
for both retail and wholesale channels, remote online employment,
electronic information retrieval, and other systems that have increased
productivity and reduced the requirement for brick-and-mortar onsite
commercial transactions.
Data center managers, however, have little opportunity to bask in the
glow of external efficiencies especially when energy costs continue to
climb and energy sourcing becomes problematic. Although $3B may
be a bargain for modern US society as a whole, achieving higher levels
of data center efficiency is now a prerequisite for meeting the contin-
ued expansion of IT processing requirements. More applications and
more data means either more hardware and energy draw or the adop-
tion of new data center technologies and practices that can achieve
much more with far less.






What differentiates the new data center architecture from the old may
not be obvious at first glance. There are, after all, still endless racks of
blinking lights, cabling, network infrastructure, storage arrays, and
other familiar systems and a certain chill in the air. The differences are
found in the types of technologies deployed and the real estate
required to house them.
As we will see in subsequent chapters, the new data center is an
increasingly virtualized environment. The static relationships between
clients, applications, and data characteristic of conventional IT pro-
cessing are being replaced with more flexible and mobile relationships
that enable IT resources to be dynamically allocated when and where
they are needed most. The enabling infrastructure in the form of vir-
tual servers, virtual fabrics, and virtual storage has the added benefit
of reducing the physical footprint of IT and its accompanying energy
consumption. The new data center architecture thus reconciles the
conflict between supply and demand by requiring less energy while
supplying higher levels of IT productivity.




Running Hot and Cold
                                                              2
Taking the heat



Dissipating the heat generated by IT equipment is a persistent prob-
lem for data center operations. Cooling systems alone can account for
one third to one half of data center energy consumption. Over-provi-
sioning the thermal plant to accommodate current and future
requirements leads to higher operational costs. Under-provisioning the
thermal plant to reduce costs can negatively impact IT equipment,
increase the risk of equipment outages, and disrupt ongoing business
operations. Resolving heat generation issues therefore requires a
multi-pronged approach to address (1) the source of heat from IT
equipment, (2) the amount and type of cooling plant infrastructure
required, and (3) the efficiency of air flow around equipment on the
data center floor to remove heat.

Energy, Power, and Heat
In common usage, energy is the capacity of a physical system to do
work and is expressed in standardized units of joules (the work done
by a force of one newton moving one meter along the line of direction
of the force). Power, by contrast, is the rate at which energy is
expended over time, with one watt of power equal to one joule of
energy per second. The power of a 100-watt light bulb, for example, is
equivalent to 100 joules of energy per second, so the amount of
energy consumed by the bulb over an hour is 360,000 joules (0.1 kWh).
Because electrical systems often consume thousands of watts, the
amount of energy consumed is expressed in kilowatt hours (kWh), and
in fact the kilowatt hour is the preferred unit used by power companies
for billing purposes. A system that requires 10,000 watts of power
would thus consume and be billed for 10 kWh of energy for each hour
of operation, or 240 kWh per day, or 87,600 kWh per year. The typical
American household consumes 10,656 kWh per year.
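This arithmetic is easy to check. The short Python sketch below simply restates the 10,000-watt example; the electricity rate is an assumed figure included only to show how a power rating translates into an annual bill.

# Illustrative only: converts a power rating in watts to annual energy use,
# restating the 10,000-watt example above.
WATTS = 10_000                 # power draw of the system
RATE_PER_KWH = 0.10            # assumed electricity rate in dollars (illustrative)

kwh_per_hour = WATTS / 1000                 # 10 kWh for each hour of operation
kwh_per_day = kwh_per_hour * 24             # 240 kWh per day
kwh_per_year = kwh_per_day * 365            # 87,600 kWh per year

print(f"Annual energy: {kwh_per_year:,.0f} kWh, "
      f"about ${kwh_per_year * RATE_PER_KWH:,.0f} per year at ${RATE_PER_KWH}/kWh")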




Medium and large IT hardware products are typically in the 1000+
watt range. Fibre Channel directors, for example, range from as little as
1300 watts (Brocade) to more than 3000 watts (competition). A large
storage array can be in the 6400 watt range. Although low-end servers
may be rated at ~200 watts, higher-end enterprise servers can be as
much as 8000 watts. With the high population of servers and the req-
uisite storage infrastructure to support them in the data center, plus
the typical 2x factor for the cooling plant energy draw, it is not difficult
to understand why data center power bills keep escalating. According
to the Environmental Protection Agency (EPA), data centers in the US
collectively consume the energy equivalent of approximately 6 million
households, or about 61 billion kWh per year.
Energy consumption generates heat. While energy consumption is
expressed in watts, heat dissipation is expressed in BTU (British Ther-
mal Units) per hour (h). One watt is approximately 3.4 BTU/h. Because
BTUs quickly add up to tens or hundreds of thousands per hour in
complex systems, heat can also be expressed in therms, with one
therm equal to 100,000 BTU. Your household heating bill, for example,
is often listed as therms averaged per day or billing period.
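The same back-of-the-envelope conversion applies to heat load. The sketch below uses the approximate factor of 3.4 BTU/h per watt cited above, applied to the 6,400-watt storage array figure mentioned earlier; the therm conversion is included only to show scale.

# Illustrative conversion from power draw to heat load, using the
# approximate factor cited above (1 watt is roughly 3.4 BTU/h).
BTU_PER_HOUR_PER_WATT = 3.4

def heat_load_btu_per_hour(watts):
    """Approximate heat dissipation in BTU/h for a given power draw in watts."""
    return watts * BTU_PER_HOUR_PER_WATT

watts = 6_400                                  # storage array example from this chapter
btu_h = heat_load_btu_per_hour(watts)
therms_per_day = btu_h * 24 / 100_000          # 1 therm = 100,000 BTU
print(f"{watts} W is roughly {btu_h:,.0f} BTU/h, "
      f"or about {therms_per_day:.1f} therms per day")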

Environmental Parameters
Because data centers are closed environments, ambient temperature
and humidity must also be considered. ASHRAE Thermal Guidelines
for Data Processing Environments provides best practices for main-
taining proper ambient conditions for operating IT equipment within
data centers. Data centers typically run fairly cool at about 68 degrees
Fahrenheit and 50% relative humidity. While legacy mainframe sys-
tems did require considerable cooling to remain within operational
norms, open systems IT equipment is less demanding. Consequently,
there has been a more recent trend to run data centers at higher
ambient temperatures, sometimes disturbingly referred to as
“Speedo” mode data center operation. Although ASHRAE's guidelines
present fairly broad allowable ranges of operation (50 to 90 degrees,
20 to 80% relative humidity), recommended ranges are still somewhat
narrow (68 to 77 degrees, 40 to 55% relative humidity).






Rationalizing IT Equipment Distribution
Servers and network equipment are typically configured in standard
19" (wide) racks, and rack enclosures, in turn, are arranged for accessi-
bility for cabling and servicing. Increasingly, however, the floor plan for
data center equipment distribution must also accommodate air flow
for equipment cooling. This requires that individual units be mounted
in a rack for consistent air flow direction (all exhaust to the rear or all
exhaust to the front) and that the rows of racks be arranged to exhaust
into a common space, called a hot aisle/cold aisle plan, as shown in
Figure 3.

Figure 3. Hot aisle/cold aisle equipment floor plan (equipment rows alternate with cold aisles and hot aisles, with air flow moving from the cold aisles through the equipment into the hot aisles).

A hot aisle/cold aisle floor plan provides greater cooling efficiency by
directing cold to hot air flow for each equipment row into a common
aisle. Each cold aisle feeds cool air for two equipment rows while each
hot aisle allows exhaust for two equipment rows, thus enabling maxi-
mum benefit for the hot/cold circulation infrastructure. Even greater
efficiency is achieved by deploying equipment with variable-speed
fans.








Figure 4. Variable speed fans enable more efficient distribution of cooling (with constant speed fans, equipment at the bottom of the rack runs cooler; with variable speed fans, cooling is more even throughout the rack).

Variable speed fans increase or decrease their spin rate in response to
changes in equipment temperature. As shown in Figure 4, cold air flow
into equipment racks with constant speed fans favors the hardware
mounted in the lower equipment slots and thus nearer to the cold air
feed. Equipment mounted in the upper slots is heated by its own
power draw as well as the heat exhaust from the lower tiers. Use of
variable speed fans, by contrast, enables each unit to selectively apply
cooling as needed, with more even utilization of cooling throughout the
equipment rack.
Research done by Michael Patterson and Annabelle Pratt of Intel lever-
ages the hot aisle/cold aisle floor plan approach to create a metric for
measuring energy consumption of IT equipment. By convention, the
energy consumption of a unit of IT hardware can be measured physi-
cally via use of metering equipment or approximated via use of the
manufacturer's stated power rating (in watts or BTUs).
As shown in Figure 5 Patterson and Pratt incorporate both the energy
draw of the equipment mounted within a rack and the associated hot
aisle/cold aisle real estate required to cool the entire rack. This “work
cell” unit thus provides a more accurate description of what is actually
required to power and cool IT equipment and, supposing the equip-
ment (for example, servers) is uniform across a row, provides a useful
multiplier for calculating total energy consumption of an entire row of
mounted hardware.






Figure 5. The concept of work cell incorporates both equipment power draw and requisite cooling (a work cell spans an equipment rack plus its share of the adjacent cold and hot aisles).
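As a rough sketch of how the work cell can serve as a row-level multiplier, the following assumes a hypothetical per-rack IT load and cooling overhead; the numbers are placeholders chosen to illustrate the arithmetic, not values from Patterson and Pratt.

# Hypothetical work-cell arithmetic: assumed per-rack IT power plus an
# assumed cooling overhead, multiplied across a uniform row of racks.
RACK_IT_WATTS = 8_000        # assumed IT load per rack (illustrative)
COOLING_OVERHEAD = 0.5       # assumed cooling watts per IT watt (illustrative)
RACKS_PER_ROW = 10

work_cell_watts = RACK_IT_WATTS * (1 + COOLING_OVERHEAD)
row_watts = work_cell_watts * RACKS_PER_ROW
row_kwh_per_year = row_watts / 1000 * 24 * 365

print(f"Work cell: {work_cell_watts:,.0f} W; row of {RACKS_PER_ROW} racks: "
      f"{row_kwh_per_year:,.0f} kWh per year")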

When energy was plentiful and cheap, it was often easy to overlook the
basic best practices for data center hardware deployment and the sim-
ple remedies to correct inefficient air flow. Blanking plates, for
example, are used to cover unused rack or cabinet slots and thus
enforce more efficient airflow within an individual rack. Blanking
plates, however, are often ignored, especially when equipment is fre-
quently moved or upgraded. Likewise, it is not uncommon to find
decommissioned equipment still racked up (and sometimes actually
powered on). Racked but unused equipment can disrupt air flow within
a cabinet and trap the heat generated by active hard-
ware. In raised floor data centers, decommissioned cabling can
disrupt cold air circulation and unsealed cable cutouts can result in
continuous and fruitless loss of cooling. Because the cooling plant
itself represents such a significant share of data center energy use,
even seemingly minor issues can quickly add up to major inefficien-
cies and higher energy bills.






Economizers
Traditionally, data center cooling has been provided by large air condi-
tioning systems (computer room air conditioning, or CRAC) that used
CFC (chlorofluorocarbon) or HCFC (hydrochlorofluorocarbon) refriger-
ants. Since both CFCs and HCFCs are ozone depleting, current
systems use ozone-friendly refrigerants to minimize broader environ-
mental impact. Conventional CRAC systems, however, consume
significant amounts of energy and may account for nearly half of a
data center power bill. In addition, these systems are typically over-pro-
visioned to accommodate data center growth and consequently incur
a higher operational expense than is justified for the required cooling
capacity.
For new data centers in temperate or colder latitudes, economizers
can provide part or all of the cooling requirement. Economizer technol-
ogy dates to the mid-1800s but has seen a revival in response to rising
energy costs. As shown in Figure 6, an economizer (in this case, a dry-
side economizer) is essentially a heat exchanger that leverages cooler
outside ambient air temperature to cool the equipment racks.

Figure 6. An economizer uses the lower ambient temperature of outside air to provide cooling (outside air passes through a damper, particulate filter, and humidifier/dehumidifier before entering the data center; exhaust leaves through the air return).

Use of outside air has its inherent problems. Data center equipment is
sensitive to particulates that can build up on circuit boards and con-
tribute to heating issues. An economizer may therefore incorporate
particulate filters to scrub the external air before the air flow enters the
data center. In addition, external air may be too humid or too dry for
data center use. Integrated humidifiers and dehumidifiers can condi-
tion the air flow to meet operational specifications for data center use.
As stated above, ASHRAE recommends 40 to 55% relative humidity.






Dry-side economizers depend on the external air supply temperature
to be sufficiently lower than the data center itself, and this may fluctu-
ate seasonally. Wet-side economizers thus include cooling towers as
part of the design to further condition the air supply for data center
use. Cooling towers present their own complications, which are tough,
especially in more arid geographies where water resources are expen-
sive and scarce. Ideally, economizers should leverage as much
recyclable resources as possible to accomplish the task of cooling
while reducing any collateral environmental impact.

Monitoring the Data Center Environment
Because vendor wattage and BTU specifications may assume maxi-
mum load conditions, using data sheet specifications or equipment
label declarations does not provide an accurate basis for calculating
equipment power draw or heat dissipation. An objective multi-point
monitoring system for measuring heat and humidity throughout the
data center is really the only means to observe and proactively
respond to changes in the environment.
A number of monitoring options are available today. For example,
some vendors are incorporating temperature probes into their equip-
ment design to provide continuous reporting of heat levels via
management software. Some solutions provide rack-mountable sys-
tems that include both temperature and humidity probes and
monitoring through a Web interface. Fujitsu offers a fiber optic system
that leverages the effect of temperature on light propagation to pro-
vide a multi-point probe using a single fiber optic cable strung
throughout equipment racks. Accuracy is reported to be within a half
degree Celsius and within 1 meter of the measuring point. In addition,
new monitoring software products can render a three-dimensional
view of temperature distribution across the entire data center, analo-
gous to an infrared photo of a heat source.
Although monitoring systems add cost to data center design, they are
invaluable diagnostic tools for fine-tuning airflow and equipment
placement to maximize cooling and keeping power and cooling costs
to a minimum. Many monitoring systems can be retrofitted to existing
data center plants so that even older sites can leverage new
technologies.
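As a simple sketch of how multi-point readings might be checked against the ASHRAE recommended ranges quoted earlier, the following assumes a hypothetical read_sensor() function that returns temperature and relative humidity for a named probe; it does not represent the interface of any particular monitoring product.

# Hypothetical multi-point environmental check against the ASHRAE
# recommended ranges cited in this chapter (68 to 77 degrees F, 40 to 55% RH).
TEMP_RANGE_F = (68.0, 77.0)
RH_RANGE_PCT = (40.0, 55.0)

def read_sensor(probe_id):
    """Placeholder for a real probe query; returns (temperature_F, humidity_pct)."""
    return 72.0, 48.0    # canned reading for illustration

def check_probe(probe_id):
    temp_f, rh_pct = read_sensor(probe_id)
    alerts = []
    if not TEMP_RANGE_F[0] <= temp_f <= TEMP_RANGE_F[1]:
        alerts.append(f"{probe_id}: temperature {temp_f} F outside recommended range")
    if not RH_RANGE_PCT[0] <= rh_pct <= RH_RANGE_PCT[1]:
        alerts.append(f"{probe_id}: humidity {rh_pct}% outside recommended range")
    return alerts

for probe in ("rack-01-top", "rack-01-bottom", "cold-aisle-3"):
    for alert in check_probe(probe):
        print(alert)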




Chapter 3: Doing More with Less
Leveraging virtualization and blade server technologies



Of the three primary components of an IT data center infrastructure—
servers, storage and network—servers are by far the most populous
and have the highest energy impact. Servers represent approximately
half of the IT equipment energy cost and about a quarter of the total
data center power bill. Server technology has therefore been a prime
candidate for regulation via EPA Energy Star and other market-driven
initiatives and has undergone a transformation in both hardware and
software. Server virtualization and blade server design, for example,
are distinct technologies fulfilling different goals but together have a
multiplying effect on server processing performance and energy effi-
ciency. In addition, multi-core processors and multi-processor
motherboards have dramatically increased server processing power in
a more compact footprint.

VMs Reborn
The concept of virtual machines dates back to mainframe days. To
maximize the benefit of mainframe processing, a single physical sys-
tem was logically partitioned into independent virtual machines. Each
VM ran its own operating system and applications in isolation although
the processor and peripherals could be shared. In today's usage, VMs
typically run on open systems servers and although direct-connect
storage is possible, shared storage on a SAN or NAS is the norm.
Unlike previous mainframe implementations, today's virtualization
software can support dozens of VMs on a single physical server. Typi-
cally, 10 or fewer VM instances are run per physical platform although
more powerful server platforms can support 20 or more VMs.






The benefits of server virtualization are as obvious as the potential
risks. Running 10 VMs on a single server platform eliminates the need
for 9 additional servers with their associated cost, components, and
accompanying power draw and heat dissipation. For data centers with
hundreds or thousands of servers, virtualization offers an immediate
solution for server sprawl and ever increasing costs.
Like any virtualization strategy, however, the logical separation of VMs
must be maintained and access to server memory and external
peripherals negotiated to prevent conflicts or errors. VMs on a single
platform are hosted by a hypervisor layer which runs either directly
(Type 1 or native) on the server hardware or on top of (Type 2 or
hosted) the conventional operating system already running on the
server hardware.



Figure 7. A native or Type 1 hypervisor (each application/OS pair and a service console run on the hypervisor, which runs directly on the hardware: CPU, memory, NIC, and storage I/O).

In a native Type 1 virtualization implementation, the hypervisor runs
directly on the server hardware as shown in Figure 7. This type of
hypervisor must therefore support all CPU, memory, network and stor-
age I/O traffic directly without the assistance of an underlying
operating system. The hypervisor is consequently written to a specific
CPU architecture (for open systems, typically an Intel x86 design) and
associated I/O. Clearly, one of the benefits of native hypervisors is that
overall latency can be minimized as individual VMs perform the normal
functions required by their applications. With the hypervisor directly
managing hardware resources, it is also less vulnerable over time to
code changes or updates that might be required if an underlying OS
were used.








Figure 8. A hosted or Type 2 hypervisor (each application/OS pair runs on the hypervisor, which is installed on top of the host operating system and the underlying hardware: CPU, memory, NIC, and storage I/O).

As shown in Figure 8, a hosted or Type 2 server virtualization solution
is installed on top of the host operating system. The advantage of this
approach is that virtualization can be implemented on existing servers
to more fully leverage existing processing power and support more
applications in the same footprint. Given that the host OS and hypervi-
sor layer insert additional steps between the VMs and the lower-level
hardware, this hosted implementation incurs more latency than native
hypervisors. On the other hand, hosted hypervisors can readily support
applications with moderate performance requirements and still
achieve the objective of consolidating compute resources.
In both native and hosted hypervisor environments, the hypervisor
oversees the creation and activity of its VMs to ensure that each VM
has its requisite resources and does not interfere with the activity of
other VMs. Without the proper management of shared memory tables
by the hypervisor, for example, one VM instance could easily crash
another. The hypervisor must also manage the software traps created
to intercept hardware calls made by the guest OS and provide the
appropriate emulation of normal OS hardware access and I/O.
Because the hypervisor is now managing multiple virtual computers,
secure access to the hypervisor itself must be maintained. Efforts to
standardize server virtualization management for stable and secure
operation are being led by the Distributed Management Task Force
(DMTF) through its Virtualization Management Initiative (VMAN) and
through collaborative efforts by virtualization vendors and partner
companies.





Server virtualization software is now available for a variety of CPUs,
hardware platforms and operating systems. Adoption for mid-tier, mod-
erate performance applications has been enabled by the availability of
economical dual-core CPUs and commodity rack-mount servers. High-
performance requirements can be met with multi-CPU platforms opti-
mized for shared processing. Although server virtualization has
steadily been gaining ground in large data centers, there has been
some reluctance to commit the most mission-critical applications to
VM implementations. Consequently, mid-tier applications have been
first in line and as these deployments become more pervasive and
proven, mission-critical applications will follow.
In addition to providing a viable means to consolidate server hardware
and reduce energy costs, server virtualization enables a degree of
mobility unachievable via conventional server management. Because
the virtual machine is now detached from the underlying physical pro-
cessing, memory, and I/O hardware, it is now possible to migrate a
virtual machine from one hardware platform to another non-disrup-
tively. If, for example, an application's performance is beginning to
exceed the capabilities of its shared physical host, it can be migrated
onto a less busy host or one that supports faster CPUs and I/O. This
application agility that initially was just an unintended by-product of
migrating virtual machines has become one of the compelling reasons
to invest in a virtual server solution. With ever-changing business,
workload and application priorities, the ability to quickly shift process-
ing resources where most needed is a competitive business
advantage.
As discussed in more detail below, virtual machine mobility creates
new opportunities for automating application distribution within the
virtual server pool and implementing policy-based procedures to
enforce priority handling of select applications over others. Communi-
cation between the virtualization manager and the fabric via APIs, for
example, enable proactive response to potential traffic congestion or
changes in the state of the network infrastructure. This further simpli-
fies management of application resources and ensures higher
availability.






Blade Server Architecture
Server consolidation in the new data center can also be achieved by
deploying blade server frames. The successful development of blade
server architecture has been dependent on the steady increase in CPU
processing power and solving basic problems around shared power,
cooling, memory, network, storage, and I/O resources. Although blade
servers are commonly associated with server virtualization, these are
distinct technologies that have a multiplying benefit when combined.
Blade server design strips away all but the most essential dedicated
components from the motherboard and provides shared assets as
either auxiliary special function blades or as part of the blade chassis
hardware. Consequently, the power consumption of each blade server
is dramatically reduced while power supply, fans and other elements
are shared with greater efficiency. A standard data center rack, for
example, can accommodate 42 1U conventional rack-mount servers,
but 128 or more blade servers in the same space. A single rack of
blade servers can therefore house the equivalent of 3 racks of conven-
tional servers; and although the cooling requirement for a fully
populated blade server rack may be greater than for a conventional
server rack, it is still less than the equivalent 3 racks that would other-
wise be required.
As shown in Figure 9, a blade server architecture offloads all compo-
nents that can be supplied by the chassis or by supporting specialized
blades. The blade server itself is reduced to one or more CPUs and
requisite auxiliary logic. The degree of component offload and avail-
ability of specialized blades varies from vendor to vendor, but the net
result is essentially the same. More processing power can now be
packed into a much smaller space and compute resources can be
managed more efficiently.
Figure 9. A blade server architecture centralizes shared resources while reducing individual blade server elements (a conventional server includes its own CPU, memory, auxiliary logic, network I/O, storage, power supply, and fan, whereas each blade is reduced to CPU and auxiliary logic while power supplies, fans, memory, network I/O, and Brocade Access Gateway connectivity to external SAN storage are shared across the chassis).



By significantly reducing the number of discrete components per pro-
cessing unit, the blade server architecture achieves higher efficiencies
in manufacturing, reduced consumption of resources, streamlined
design and reduced overall costs of provisioning and administration.
The unique value-add of each vendor's offering may leverage hot-swap
capability, variable-speed fans, variable-speed CPUs, shared memory
blades and consolidated network access. Brocade has long worked
with the major blade server manufacturers to provide optimized
Access Gateway and switch blades to centralize storage network capa-
bility and the specific features of these products will be discussed in
the next section.
Although consolidation ratios of 3:1 are impressive, much higher
server consolidation is achieved when blade servers are combined
with server virtualization software. A fully populated data center rack
of 128 blade servers, for example, could support 10 or more virtual
machines per blade for a total of 1280 virtual servers. That would be
the equivalent of 30 racks (at 42 servers per rack) of conventional 1U
rack-mount servers running one OS instance per server. From an
energy savings standpoint, that represents the elimination of over
1000 power supplies, fan units, network adapters, and other elements
that contribute to higher data center power bills and cooling load.
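The consolidation arithmetic above can be restated in a few lines. The sketch below simply reproduces the figures used in this section.

# Restating the consolidation figures used in this section.
BLADES_PER_RACK = 128
VMS_PER_BLADE = 10
SERVERS_PER_1U_RACK = 42

virtual_servers = BLADES_PER_RACK * VMS_PER_BLADE              # 1,280 virtual servers
equivalent_1u_racks = virtual_servers / SERVERS_PER_1U_RACK    # roughly 30 racks
eliminated_components = virtual_servers - BLADES_PER_RACK      # discrete servers avoided

print(f"{virtual_servers} virtual servers in one blade rack, roughly "
      f"{equivalent_1u_racks:.0f} racks of conventional 1U servers; about "
      f"{eliminated_components:,} fewer power supplies, fan units, and network adapters")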
As a 2009 survey by blade.org shows, adoption of blade server tech-
nology has been increasing in both large data centers and small/
medium business (SMB) environments. Slightly less than half of the
data center respondents and approximately a third of SMB operations
have already implemented blade servers and over a third in both cate-
gories have deployment plans in place. With limited data center real
estate and increasing power costs squeezing data center budgets, the
combination of blade servers and server virtualization is fairly easy to
justify.

Brocade Server Virtualization Solutions
Whether on standalone servers or blade server frames, implementing
server virtualization has both upstream (client) and downstream (stor-
age) impact in the data center. Because Brocade offers a full spectrum
of products spanning LAN, WAN and SAN, it can help ensure that a
server virtualization deployment proactively addresses the new
requirements of both client and storage access. The value of a server
virtualization solution is thus amplified when combined with Brocade's
network technology.






To maximize the benefits of network connectivity in a virtualized server
environment, Brocade has worked with the major server virtualization
solutions and managers to deliver high performance, high availability,
security, energy efficiency, and streamlined management end to end.
The following Brocade solutions can enhance a server virtualization
deployment and help eliminate potential bottlenecks:
Brocade High-Performance 8 Gbps HBAs
In a conventional server, a host bus adapter (HBA) provides storage
access for a single operating system and its applications. In a virtual
server configuration, the HBA may be supporting 10 to 20 OS
instances, each running its own application. High performance is
therefore essential for enabling multiple virtual machines to share
HBA ports without congestion. The Brocade 815 (single port) and 825
HBAs (dual port, shown in Figure 10) provide 8 Gbps bandwidth and
500,000 I/Os per second (IOPS) performance per port to ensure the
maximum throughput for shared virtualized connectivity. Brocade
N_Port Trunking enables the 825 to deliver an unprecedented 16
Gbps bandwidth (3200 MBps) and one million IOPS performance. This
exceptional performance helps ensure that server virtualization con-
figurations can expand over time to accommodate additional virtual
machines without impacting the continuous operation of existing
applications.




Figure 10. The Brocade 825 8 Gbps HBA supports N_Port Trunking for an aggregate 16 Gbps bandwidth and one million IOPS.






The Brocade 815 and 825 HBAs are further optimized for server virtu-
alization connectivity by supporting advanced intelligent services that
enable end-to-end visibility and management. As discussed below,
Brocade virtual machine SAN boot, N_Port ID Virtualization (NPIV) and
integrated Quality of Service (QoS) provide powerful tools for simplify-
ing virtual machine deployments and providing proactive alerts directly
to server virtualization managers.
Brocade 8 Gbps Switch and Director Ports
In virtual server environments, the need for speed does not end at the
network or storage port. Because more traffic is now traversing fewer
physical links, building high-performance network infrastructures is a
prerequisite for maintaining non-disruptive, high-performance virtual
machine traffic flows. Brocade's support of 8 Gbps ports on both
switch and enterprise-class platforms enables customers to build high-
performance, non-blocking storage fabrics that can scale from small
VM configurations to enterprise-class data center deployments.
Designing high-performance fabrics ensures that applications running
on virtual machines are not exposed to bandwidth issues and can
accommodate high volume traffic patterns required for data backup
and other applications.
Brocade Virtual Machine SAN Boot
For both standalone physical servers and blade server environments,
the ability to boot from the storage network greatly simplifies virtual
machine deployment and migration of VM instances from one server
to another. As shown in Figure 11, SAN boot centralizes management
of boot images and eliminates the need for local storage on each phys-
ical server platform. When virtual machines are migrated from one
hardware platform to another, the boot images can be readily
accessed across the SAN via Brocade HBAs.







                                         ...                                  ...
  Boot                                  Servers
 images
                                                                           Brocade
  ...                            ...                                      825 HBAs
 Servers



                                           SAN
                                         switches
 Direct-
attached
 storage
  (DAS)
                                       Storage
                                        arrays

                                                  Boot images


Figure 11. SAN boot centralizes management of boot images and
facilitates migration of virtual machines between hosts.

Brocade 815 and 825 HBAs provide the ability to automatically
retrieve boot LUN parameters from a centralized fabric-based registry.
This eliminates the error-prone manual host-based configuration
scheme required by other HBA vendors. Brocade's SAN boot and boot
LUN discovery facilitates migration of virtual machines from host to
host, removes the need for local storage and improves reliability and
performance.
Brocade N_Port ID Virtualization for Workload
Optimization
In a virtual server environment, the individual virtual machine
instances are unaware of physical ports since the underlying hardware
has been abstracted by the hypervisor. This creates potential problems
for identifying traffic flows from virtual machines through shared phys-
ical ports. NPIV is an industry standard that enables multiple Fibre
Channel addresses to share a single physical Fibre Channel port. In a
server virtualization environment, NPIV allows each virtual machine
instance to have a unique World Wide Name (WWN) or virtual HBA
port. This in turn provides a level of granularity for identifying each VM
attached to the fabric for end-to-end monitoring, accounting, and con-
figuration. Because the WWN is now bound to an individual virtual
machine, the WWN follows the VM when it is migrated to another plat-
form. In addition, NPIV creates the linkage required for advanced
services such as QoS, security, and zoning as discussed in the next
section.






Configuring Single Initiator/Target Zoning
Brocade has been a pioneer in fabric-based zoning to segregate fabric
traffic and restrict visibility of storage resources to only authorized
hosts. As a recognized best practice for server to storage configura-
tion, NPIV and single initiator/target zoning ensure that individual
virtual machines have access only to their designated storage assets.
This feature minimizes configuration errors during VM migration and
extends the management visibility of fabric connections to specific vir-
tual machines.
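To make the single initiator/target pattern concrete, the sketch below pairs each virtual machine's NPIV-assigned virtual WWN with exactly one storage target port. The WWNs and the zone-naming scheme are invented for illustration; actual zones are created and enforced through Brocade's fabric management tools rather than application code.

# Illustrative single initiator/target zoning: each VM's virtual WWN is
# paired with exactly one storage target port. All WWNs are fabricated.
vm_wwns = {
    "vm-web01": "10:00:00:05:1e:00:00:01",
    "vm-db01": "10:00:00:05:1e:00:00:02",
}
target_ports = {
    "vm-web01": "50:00:09:72:00:11:22:33",
    "vm-db01": "50:00:09:72:00:11:22:44",
}

def single_initiator_target_zones(initiators, targets):
    """Build one zone per (initiator, target) pair, never grouping initiators."""
    zones = {}
    for vm, wwn in initiators.items():
        zones["z_" + vm] = [wwn, targets[vm]]
    return zones

for name, members in sorted(single_initiator_target_zones(vm_wwns, target_ports).items()):
    print(name, "->", "; ".join(members))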
Brocade End-to-End Quality of Service
The combination of NPIV and zoning functionality on Brocade HBAs
and switches provides the foundation for higher-level fabric services
including end-to-end QoS. Because the traffic flows from each virtual
machine can be identified by virtual WWN and segregated via zoning,
each can be assigned a delivery priority (low, medium or high) that is
enforced fabric-wide from the host connection to the storage port, as
shown in Figure 12.

Figure 12. Brocade's QoS enforces traffic prioritization from the server HBA to the storage port across the fabric (applications are assigned high, medium, or low priority, with medium as the default; Virtual Channels technology enables QoS at the ASIC level in the HBA, and frame-level interleaving of outbound data maximizes initiator link utilization).

While some applications running on virtual machines are logical candi-
dates for QoS prioritization (for example, SQL Server), Brocade's Top
Talkers management feature can help identify which VM applications
may require priority treatment. Because Brocade end-to-end QoS is ulti-
mately tied to the virtual machine's virtualized WWN address, the QoS
assignment follows the VM if it is migrated from one hardware platform
to another. This feature ensures that applications enjoy non-disruptive
data access despite adds/moves and changes to the downstream envi-
ronment and enables administrators to more easily fulfill client service-
level agreements (SLAs).
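The idea of a priority assignment that follows the virtual machine can be pictured as a map keyed by virtual WWN, as in the sketch below. The WWNs and priority choices are invented for illustration; real QoS assignment is configured in the fabric, not in application code.

# Illustrative priority map keyed by virtual WWN, so the assignment
# travels with the VM when it migrates between physical hosts.
qos_by_wwn = {
    "10:00:00:05:1e:00:00:01": "high",    # e.g. a latency-sensitive database VM
    "10:00:00:05:1e:00:00:02": "low",     # e.g. a background reporting VM
}

def priority_for(wwn):
    """Look up a VM's priority by virtual WWN; unknown WWNs get the medium default."""
    return qos_by_wwn.get(wwn, "medium")

print(priority_for("10:00:00:05:1e:00:00:01"))   # high
print(priority_for("10:00:00:05:1e:00:00:99"))   # medium (default)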
Brocade LAN and SAN Security
Most companies are now subject to government regulations that man-
date the protection and security of customer data transactions. Planning
a virtualization deployment must therefore also account for basic secu-
rity mechanisms for both client and storage access. Brocade offers a
broad spectrum of security solutions, including LAN and WAN-based
technologies and storage-specific SAN security features. For example,
Brocade SecureIron products, shown in Figure 13, provide firewall traffic
management and LAN security to safeguard access from clients to vir-
tual hosts on the IP network.




Figure 13. Brocade SecureIron switches provide firewall traffic man-
agement and LAN security for client access to virtual server clusters.

Brocade SAN security features include authentication via access control
lists (ACLs) and role-based access control (RBAC) as well as security
mechanisms for authenticating connectivity of switch ports and devices
to fabrics. In addition, the Brocade Encryption Switch, shown in
Figure 14, and FS8-18 Encryption Blade for the Brocade DCX Backbone
platform provide high-performance (96 Gbps) data encryption for data-
at-rest. Brocade's security environment thus protects data-in-flight from
client to virtual host as well as data written to disk across the SAN.




Figure 14. The Brocade Encryption Switch provides high-performance
data encryption to safeguard data written to disk or tape.



Brocade Access Gateway for Blade Frames
Server virtualization software can be installed on conventional server
platforms or blade server frames. Blade server form factors offer the
highest density for consolidating IT processing in the data center and
leverage shared resources across the backplane. To optimize storage
access from blade server frames, Brocade has partnered with blade
server providers to create high-performance, high-availability Access
Gateway blades for Fibre Channel connectivity to the SAN. Brocade
Access Gateway technology leverages NPIV to simplify virtual machine
addressing and F_Port Trunking for high utilization and automatic link
failover. By integrating SAN connectivity into a virtualized blade server
chassis, Brocade helps to streamline deployment and simplify manage-
ment while reducing overall costs.
The Energy-Efficient Brocade DCX Backbone Platform for
Consolidation
With 4x the performance and over 10x the energy efficiency of other
SAN directors, the Brocade DCX delivers the high performance required
for virtual server implementation and can accommodate growth in VM
environments in a compact footprint. The Brocade DCX supports 384
ports of 8 Gbps for a total of 3 Tbps chassis bandwidth. Ultra-high-speed
inter-chassis links (ICLs) allow further expansion of the SAN core for
scaling to meet the requirements of very large server virtualization
deployments. The Brocade DCX is also designed to non-disruptively inte-
grate Fibre Channel over Ethernet (FCoE) and Data Center Bridging
(DCB) for future virtual server connectivity. The Brocade DCX is also
available in a 192-port configuration (as the Brocade DCX-4S) to support
medium VM configurations, while providing the same high availability,
performance, and advanced SAN services.
The Brocade DCX's Adaptive Networking services for QoS, ingress rate
limiting, congestion detection, and management ensure that traffic
streams from virtual machines are proactively managed throughout the
fabric and accommodate the varying requirements of upper-layer busi-
ness applications. Adaptive Networking services provide greater agility
in managing application workloads as they migrate between physical
servers.






Enhanced and Secure Client Access with Brocade LAN
Solutions
Brocade offers a full line of sophisticated LAN switches and routers for
Ethernet and IP traffic from Layer 2/3 to Layer 4–7 application switch-
ing. This product suite is the natural complement to Brocade's robust
SAN products and enables customers to build full-featured and secure
networks end to end. As with the Brocade DCX architecture for SANs,
Brocade BigIron RX, shown in Figure 15, and FastIron SuperX switches
incorporate best-in-class functionality and low power consumption to
deliver high-performance core switching for data center LAN backbones.




Figure 15. Brocade BigIron RX platforms offer high-performance Layer
2/3 switching in three compact, energy-efficient form factors.

Brocade edge switches with Power over Ethernet (PoE) support enable
customers to integrate a wide variety of IP business applications, includ-
ing voice over IP (VoIP), wireless access points, and security monitoring.
Brocade SecureIron switches bring advanced security protection for cli-
ent access into virtualized server clusters, while Brocade ServerIron
switches provide Layer 4–7 application switching and load balancing.
Brocade LAN solutions provide up to 10 Gbps throughput per port and
so can accommodate the higher traffic loads typical of virtual machine
environments.
Brocade Industry Standard SMI-S Monitoring
Virtual server deployments dramatically increase the number of data
flows and requisite bandwidth per physical server or blade server.
Because server virtualization platforms can support dynamic migration
of application workloads between physical servers, complex traffic pat-
terns are created and unexpected congestion can occur. This
complicates server management and can impact performance and
availability. Brocade can proactively address these issues by integrating
communication between Brocade intelligent fabric services with VM

 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfLoriGlavin3
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...Wes McKinney
 

Último (20)

A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptx
 
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software Developers
 
Connecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfConnecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdf
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdf
 
Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demo
 
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesAssure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
 

The New Data Center: Technologies for Consolidation and Efficiency

The real work of project management, copyediting, content generation, assembly, publication, and promotion is done by Victoria Thomas, Technical Marketing Manager at Brocade. Without Victoria's steadfast commitment, none of this material would see the light of day.

I would also like to thank Brook Reams, Solution Architect for Applications on the Integrated Marketing team, for reviewing my draft manuscript and providing suggestions and invaluable insights on the technologies under discussion.

Finally, a thank you to the entire Brocade team for making this a first-class company that produces first-class products for first-class customers worldwide.
About the Author

Tom Clark was a resident SAN evangelist for Brocade and represented Brocade in industry associations, conducted seminars and tutorials at conferences and trade shows, promoted Brocade storage networking solutions, and acted as a customer liaison. A noted author and industry advocate of storage networking technology, he was a board member of the Storage Networking Industry Association (SNIA) and former Chair of the SNIA Green Storage Initiative. Clark has published hundreds of articles and white papers on storage networking and is the author of Designing Storage Area Networks, Second Edition (Addison-Wesley 2003), IP SANs: A Guide to iSCSI, iFCP and FCIP Protocols for Storage Area Networks (Addison-Wesley 2001), Storage Virtualization: Technologies for Simplifying Data Storage and Management (Addison-Wesley 2005), and Strategies for Data Protection (Brocade Bookshelf, 2008).

Prior to joining Brocade, Clark was Director of Solutions and Technologies for McDATA Corporation and the Director of Technical Marketing for Nishan Systems, the innovator of storage over IP technology. As a liaison between marketing, engineering, and customers, he focused on customer education and defining features that ensure productive deployment of SANs. With more than 20 years of experience in the IT industry, Clark held technical marketing and systems consulting positions with storage networking and other data communications companies.

Sadly, Tom Clark passed away in February 2010. Anyone who knew Tom knows that he was intelligent, quick, a voice of sanity and also sarcasm, and a pragmatist with a great heart. He was indeed the heart of Brocade TechBytes, a monthly Webcast he described as "a late night technical talk show," which was launched in November 2008 and is still part of Brocade's Technical Marketing program.
Contents

Preface ..... xv
Chapter 1: Supply and Demand ..... 1
Chapter 2: Running Hot and Cold ..... 9
    Energy, Power, and Heat ..... 9
    Environmental Parameters ..... 10
    Rationalizing IT Equipment Distribution ..... 11
    Economizers ..... 14
    Monitoring the Data Center Environment ..... 15
Chapter 3: Doing More with Less ..... 17
    VMs Reborn ..... 17
    Blade Server Architecture ..... 21
    Brocade Server Virtualization Solutions ..... 22
        Brocade High-Performance 8 Gbps HBAs ..... 23
        Brocade 8 Gbps Switch and Director Ports ..... 24
        Brocade Virtual Machine SAN Boot ..... 24
        Brocade N_Port ID Virtualization for Workload Optimization ..... 25
        Configuring Single Initiator/Target Zoning ..... 26
        Brocade End-to-End Quality of Service ..... 26
        Brocade LAN and SAN Security ..... 27
        Brocade Access Gateway for Blade Frames ..... 28
        The Energy-Efficient Brocade DCX Backbone Platform for Consolidation ..... 28
        Enhanced and Secure Client Access with Brocade LAN Solutions ..... 29
        Brocade Industry Standard SMI-S Monitoring ..... 29
        Brocade Professional Services ..... 30
    FCoE and Server Virtualization ..... 31
Chapter 4: Into the Pool ..... 35
    Optimizing Storage Capacity Utilization in the Data Center ..... 35
    Building on a Storage Virtualization Foundation ..... 39
    Centralizing Storage Virtualization from the Fabric ..... 41
    Brocade Fabric-based Storage Virtualization ..... 43
Chapter 5: Weaving a New Data Center Fabric ..... 45
    Better Fewer but Better ..... 46
    Intelligent by Design ..... 48
    Energy Efficient Fabrics ..... 53
    Safeguarding Storage Data ..... 55
    Multi-protocol Data Center Fabrics ..... 58
    Fabric-based Disaster Recovery ..... 64
Chapter 6: The New Data Center LAN ..... 69
    A Layered Architecture ..... 71
    Consolidating Network Tiers ..... 74
    Design Considerations ..... 75
        Consolidate to Accommodate Growth ..... 75
        Network Resiliency ..... 76
        Network Security ..... 77
        Power, Space and Cooling Efficiency ..... 78
        Network Virtualization ..... 79
    Application Delivery Infrastructure ..... 80
Chapter 7: Orchestration ..... 83
Chapter 8: Brocade Solutions Optimized for Server Virtualization ..... 89
    Server Adapters ..... 89
        Brocade 825/815 FC HBA ..... 90
        Brocade 425/415 FC HBA ..... 91
        Brocade FCoE CNAs ..... 91
    Brocade 8000 Switch and FCOE10-24 Blade ..... 92
    Access Gateway ..... 93
    Brocade Management Pack ..... 94
    Brocade ServerIron ADX ..... 95
Chapter 9: Brocade SAN Solutions ..... 97
    Brocade DCX Backbones (Core) ..... 98
    Brocade 8 Gbps SAN Switches (Edge) ..... 100
        Brocade 5300 Switch ..... 101
        Brocade 5100 Switch ..... 102
        Brocade 300 Switch ..... 103
        Brocade VA-40FC Switch ..... 104
    Brocade Encryption Switch and FS8-18 Encryption Blade ..... 105
    Brocade 7800 Extension Switch and FX8-24 Extension Blade ..... 106
    Brocade Optical Transceiver Modules ..... 107
    Brocade Data Center Fabric Manager ..... 108
Chapter 10: Brocade LAN Network Solutions ..... 109
    Core and Aggregation ..... 110
        Brocade NetIron MLX Series ..... 110
        Brocade BigIron RX Series ..... 111
    Access ..... 112
        Brocade TurboIron 24X Switch ..... 112
        Brocade FastIron CX Series ..... 113
        Brocade NetIron CES 2000 Series ..... 113
        Brocade FastIron Edge X Series ..... 114
    Brocade IronView Network Manager ..... 115
    Brocade Mobility ..... 116
Chapter 11: Brocade One ..... 117
    Evolution not Revolution ..... 117
    Industry's First Converged Data Center Fabric ..... 119
        Ethernet Fabric ..... 120
        Distributed Intelligence ..... 120
        Logical Chassis ..... 121
        Dynamic Services ..... 121
    The VCS Architecture ..... 122
Appendix A: "Best Practices for Energy Efficient Storage Operations" ..... 123
    Introduction ..... 123
    Some Fundamental Considerations ..... 124
    Shades of Green ..... 125
        Best Practice #1: Manage Your Data ..... 126
        Best Practice #2: Select the Appropriate Storage RAID Level ..... 128
        Best Practice #3: Leverage Storage Virtualization ..... 129
        Best Practice #4: Use Data Compression ..... 130
        Best Practice #5: Incorporate Data Deduplication ..... 131
        Best Practice #6: File Deduplication ..... 131
        Best Practice #7: Thin Provisioning of Storage to Servers ..... 132
        Best Practice #8: Leverage Resizeable Volumes ..... 132
        Best Practice #9: Writeable Snapshots ..... 132
        Best Practice #10: Deploy Tiered Storage ..... 133
        Best Practice #11: Solid State Storage ..... 133
        Best Practice #12: MAID and Slow-Spin Disk Technology ..... 133
        Best Practice #13: Tape Subsystems ..... 134
        Best Practice #14: Fabric Design ..... 134
        Best Practice #15: File System Virtualization ..... 134
        Best Practice #16: Server, Fabric and Storage Virtualization ..... 135
        Best Practice #17: Flywheel UPS Technology ..... 135
        Best Practice #18: Data Center Air Conditioning Improvements ..... 136
        Best Practice #19: Increased Data Center Temperatures ..... 136
        Best Practice #20: Work with Your Regional Utilities ..... 137
    What the SNIA is Doing About Data Center Energy Usage ..... 137
    About the SNIA ..... 138
Appendix B: Online Sources ..... 139
Glossary ..... 141
Index ..... 153
Figures

Figure 1. The ANSI/TIA-942 standard functional area connectivity. ..... 3
Figure 2. The support infrastructure adds substantial cost and energy overhead to the data center. ..... 4
Figure 3. Hot aisle/cold aisle equipment floor plan. ..... 11
Figure 4. Variable speed fans enable more efficient distribution of cooling. ..... 12
Figure 5. The concept of work cell incorporates both equipment power draw and requisite cooling. ..... 13
Figure 6. An economizer uses the lower ambient temperature of outside air to provide cooling. ..... 14
Figure 7. A native or Type 1 hypervisor. ..... 18
Figure 8. A hosted or Type 2 hypervisor. ..... 19
Figure 9. A blade server architecture centralizes shared resources while reducing individual blade server elements. ..... 21
Figure 10. The Brocade 825 8 Gbps HBA supports N_Port Trunking for an aggregate 16 Gbps bandwidth and 1000 IOPS. ..... 23
Figure 11. SAN boot centralizes management of boot images and facilitates migration of virtual machines between hosts. ..... 25
Figure 12. Brocade's QoS enforces traffic prioritization from the server HBA to the storage port across the fabric. ..... 26
Figure 13. Brocade SecureIron switches provide firewall traffic management and LAN security for client access to virtual server clusters. ..... 27
Figure 14. The Brocade Encryption Switch provides high-performance data encryption to safeguard data written to disk or tape. ..... 27
Figure 15. Brocade BigIron RX platforms offer high-performance Layer 2/3 switching in three compact, energy-efficient form factors. ..... 29
Figure 16. FCoE simplifies the server cable plant by reducing the number of network interfaces required for client, peer-to-peer, and storage access. ..... 31
Figure 17. An FCoE top-of-rack solution provides both DCB and Fibre Channel ports and provides protocol conversion to the data center SAN. ..... 32
Figure 18. Brocade 1010 and 1020 CNAs and the Brocade 8000 Switch facilitate a compact, high-performance FCoE deployment. ..... 33
Figure 19. Conventional storage configurations often result in over- and under-utilization of storage capacity across multiple storage arrays. ..... 36
Figure 20. Storage virtualization aggregates the total storage capacity of multiple physical arrays into a single virtual pool. ..... 37
Figure 21. The virtualization abstraction layer provides virtual targets to real hosts and virtual hosts to real targets. ..... 38
Figure 22. Leveraging classes of storage to align data storage to the business value of data over time. ..... 40
Figure 23. FAIS splits the control and data paths for more efficient execution of metadata mapping between virtual storage and servers. ..... 42
Figure 24. The Brocade FA4-18 Application Blade provides line-speed metadata map execution for non-disruptive storage pooling, mirroring and data migration. ..... 43
Figure 25. A storage-centric core/edge topology provides flexibility in deploying servers and storage assets while accommodating growth over time. ..... 47
Figure 26. Brocade QoS gives preferential treatment to high-value applications through the fabric to ensure reliable delivery. ..... 49
Figure 27. Ingress rate limiting enables the fabric to alleviate potential congestion by throttling the transmission rate of the offending initiator. ..... 50
Figure 28. Preferred paths are established through traffic isolation zones, which enforce separation of traffic through the fabric based on designated applications. ..... 51
Figure 29. By monitoring traffic activity on each port, Top Talkers can identify which applications would most benefit from Adaptive Networking services. ..... 52
Figure 30. Brocade DCX power consumption at full speed on an 8 Gbps port compared to the competition. ..... 54
Figure 31. The Brocade Encryption Switch provides secure encryption for disk or tape. ..... 56
Figure 32. Using fabric ACLs to secure switch and device connectivity. ..... 58
Figure 33. Integrating formerly standalone mid-tier servers into the data center fabric with an iSCSI blade in the Brocade DCX. ..... 61
Figure 34. Using Virtual Fabrics to isolate applications and minimize fabric-wide disruptions. ..... 62
Figure 35. IR facilitates resource sharing between physically independent SANs. ..... 64
Figure 36. Long-distance connectivity options using Brocade devices. ..... 67
Figure 37. Access, aggregation, and core layers in the data center network. ..... 71
Figure 38. Access layer switch placement is determined by availability, port density, and cable strategy. ..... 73
Figure 39. A Brocade BigIron RX Series switch consolidates connectivity in a more energy efficient footprint. ..... 75
Figure 40. Network infrastructure typically contributes only 10% to 15% of total data center IT equipment power usage. ..... 79
Figure 41. Application congestion (traffic shown as a dashed line) on a Web-based enterprise application infrastructure. ..... 80
Figure 42. Application workload balancing, protocol processing offload and security via the Brocade ServerIron ADX. ..... 81
Figure 43. Open systems-based orchestration between virtualization domains. ..... 84
Figure 44. Brocade Management Pack for Microsoft Service Center Virtual Machine Manager leverages APIs between the SAN and SCVMM to trigger VM migration. ..... 86
Figure 45. Brocade 825 FC 8 Gbps HBA (dual ports shown). ..... 90
Figure 46. Brocade 415 FC 4 Gbps HBA (single port shown). ..... 91
Figure 47. Brocade 1020 (dual ports) 10 Gbps Fibre Channel over Ethernet-to-PCIe CNA. ..... 92
Figure 48. Brocade 8000 Switch. ..... 92
Figure 49. Brocade FCOE10-24 Blade. ..... 93
Figure 50. SAN Call Home events displayed in the Microsoft System Center Operations Center interface. ..... 94
Figure 51. Brocade ServerIron ADX 1000. ..... 95
Figure 52. Brocade DCX (left) and DCX-4S (right) Backbone. ..... 98
Figure 53. Brocade 5300 Switch. ..... 101
Figure 54. Brocade 5100 Switch. ..... 102
Figure 55. Brocade 300 Switch. ..... 103
Figure 56. Brocade VA-40FC Switch. ..... 104
Figure 57. Brocade Encryption Switch. ..... 105
Figure 58. Brocade FS8-18 Encryption Blade. ..... 105
Figure 59. Brocade 7800 Extension Switch. ..... 106
Figure 60. Brocade FX8-24 Extension Blade. ..... 107
Figure 61. Brocade DCFM main window showing the topology view. ..... 108
Figure 62. Brocade NetIron MLX-4. ..... 110
Figure 63. Brocade BigIron RX-16. ..... 111
Figure 64. Brocade TurboIron 24X Switch. ..... 112
Figure 65. Brocade FastIron CX-624S-HPOE Switch. ..... 113
Figure 66. Brocade NetIron CES 2000 switches, 24- and 48-port configurations in both Hybrid Fiber (HF) and RJ45 versions. ..... 114
Figure 67. Brocade FastIron Edge X 624. ..... 114
Figure 68. Brocade INM Dashboard (top) and Backup Configuration Manager (bottom). ..... 115
Figure 69. The pillars of Brocade VCS (detailed in the next section). ..... 118
Figure 70. A Brocade VCS reference network architecture. ..... 122
Preface

Data center administrators today are facing unprecedented challenges. Business applications are shifting from conventional client/server relationships to Web-based applications, data center real estate is at a premium, energy costs continue to escalate, new regulations are imposing more rigorous requirements for data protection and security, and tighter corporate budgets are making it difficult to accommodate client demands for more applications and data storage. Since all major enterprises run their businesses on the basis of digital information, the consequences of inadequate processing power, storage, network accessibility, or data availability can have a profound impact on the viability of the enterprise itself.

At the same time, new technologies that promise to alleviate some of these issues require both capital expenditures and a sharp learning curve to successfully integrate new solutions that can increase productivity and lower ongoing operational costs. The ability to quickly adapt new technologies to new problems is essential for creating a more flexible data center strategy that can meet both current and future requirements. This effort necessitates cooperation between data center administrators and vendors, and between the multiple vendors responsible for providing the elements that compose a comprehensive data center solution.

The much overused term "ecosystem" is nonetheless an accurate description of the interdependencies of technologies required for twenty-first century data center operation. No single vendor manufactures the full spectrum of hardware and software elements required to drive data center IT processing. This is especially true when the three major domains of IT operations (server, storage, and networking) are each undergoing profound technical evolution in the form of virtualization. Not only must products be designed and tested for standards compliance and multi-vendor operability, but management between the domains must be orchestrated to ensure stable operations and coordination of tasks.

Brocade has a long and proven track record in data center network innovation and collaboration with partners to create new solutions that solve real problems while reducing deployment and operational costs. This book provides an overview of the new technologies that are radically transforming the data center into a more cost-effective corporate asset and the specific Brocade products that can help you achieve this goal.

The book is organized as follows:

• "Chapter 1: Supply and Demand" starting on page 1 examines the technological and business drivers that are forcing changes in the conventional data center paradigm. Due to increased business demands (even in difficult economic times), data centers are running out of space and power, and this in turn is driving new initiatives for server, storage, and network consolidation.

• "Chapter 2: Running Hot and Cold" starting on page 9 looks at data center power and cooling issues that threaten productivity and operational budgets. New technologies such as wet- and dry-side economizers, hot aisle/cold aisle rack deployment, and proper sizing of the cooling plant can help maximize productive use of existing real estate and reduce energy overhead.

• "Chapter 3: Doing More with Less" starting on page 17 provides an overview of server virtualization and blade server technology. Server virtualization, in particular, is moving from secondary to primary applications and requires coordination with upstream networking and downstream storage for successful implementation. Brocade has developed a suite of new technologies to leverage the benefits of server virtualization and coordinate operation between virtual machine managers and the LAN and SAN networks.

• "Chapter 4: Into the Pool" starting on page 35 reviews the potential benefits of storage virtualization for maximizing utilization of storage assets and automating life cycle management.

• "Chapter 5: Weaving a New Data Center Fabric" starting on page 45 examines recent developments in storage networking technology, including higher bandwidth, fabric virtualization, enhanced security, and SAN extension. Brocade continues to pioneer more productive solutions for SANs and is the author or co-author of the significant standards underlying these new technologies.

• "Chapter 6: The New Data Center LAN" starting on page 69 highlights the new challenges that virtualization and Web-based applications present to the data communications network. Products like the Brocade ServerIron ADX Series of application delivery controllers provide more intelligence in the network to offload server protocol processing and provide much higher levels of availability and security.

• "Chapter 7: Orchestration" starting on page 83 focuses on the importance of standards-based coordination between server, storage, and network domains so that management frameworks can provide a comprehensive view of the entire infrastructure and proactively address potential bottlenecks.

• Chapters 8, 9, and 10 provide brief descriptions of Brocade products and technologies that have been developed to solve data center problems.

• "Chapter 11: Brocade One" starting on page 117 describes a new Brocade direction and innovative technologies to simplify the complexity of virtualized data centers.

• "Appendix A: Best Practices for Energy Efficient Storage Operations" starting on page 123 is a reprint of an article written by Tom Clark and Dr. Alan Yoder, NetApp, for the SNIA Green Storage Initiative (GSI).

• "Appendix B: Online Sources" starting on page 139 is a list of online resources.

• The "Glossary" starting on page 141 is a list of data center network terms and definitions.
Chapter 1: Supply and Demand

The collapse of the old data center paradigm

As in other social and economic sectors, information technology has recently found itself in the awkward position of having lived beyond its means. The seemingly endless supply of affordable real estate, electricity, data processing equipment, and technical personnel enabled companies to build large data centers to house their mainframe and open systems infrastructures and to support the diversity of business applications typical of modern enterprises. In the new millennium, however, real estate has become prohibitively expensive, the cost of energy has skyrocketed, utilities are often incapable of increasing supply to existing facilities, data processing technology has become more complex, and the pool of technical talent to support new technologies is shrinking.

At the same time, the increasing dependence of companies and institutions on electronic information and communications has resulted in a geometric increase in the amount of data that must be managed and stored. Since 2000, the amount of corporate data generated worldwide has grown from 5 exabytes (5 billion gigabytes) to over 300 exabytes, with projections of about 1 zettabyte (1000 exabytes) by 2010. This data must be stored somewhere. The installation of more servers and disk arrays to accommodate data growth is simply not sustainable as data centers run out of floor space, cooling capacity, and energy to feed additional hardware. The demands constantly placed on IT administrators to expand support for new applications and data are now in direct conflict with the supply of data center space and power. Gartner predicted that by 2009, half of the world's data centers would not have sufficient power to support their applications. An Emerson Power survey projects that 96% of all data centers will not have sufficient power by 2011.

The conventional approach to data center design and operations has endured beyond its usefulness primarily due to a departmental silo effect common to many business operations. A data center administrator, for example, could specify the near-term requirements for power distribution for IT equipment, but because the utility bill was often paid by the company's facilities management, the administrator would be unaware of continually increasing utility costs. Likewise, individual business units might deploy new rich content applications, resulting in a sudden spike in storage requirements and additional load placed on the messaging network, with no proactive notification of the data center and network operators.

In addition, the technical evolution of data center design, cooling technology, and power distribution has lagged far behind the rapid development of server platforms, networks, storage technology, and applications. Twenty-first century technology now resides in twentieth century facilities that are proving too inflexible to meet the needs of the new data processing paradigm. Consequently, many IT managers are looking for ways to align the data center infrastructure to the new realities of space, power, and budget constraints.

Although data centers have existed for over 50 years, guidelines for data center design were not codified into standards until 2005. The ANSI/TIA-942 Telecommunications Infrastructure Standard for Data Centers focuses primarily on cable plant design but also includes power distribution, cooling, and facilities layout. TIA-942 defines four basic tiers for data center classification, characterized chiefly by the degree of availability each provides:

• Tier 1. Basic data center with no redundancy
• Tier 2. Redundant components but single distribution path
• Tier 3. Concurrently maintainable with multiple distribution paths and one active
• Tier 4. Fault tolerant with multiple active distribution paths

A Tier 4 data center is obviously the most expensive to build and maintain, but fault tolerance is now essential for most data center implementations. Loss of data access is loss of business, and few companies can afford to risk unplanned outages that disrupt customers and revenue streams. A "five-nines" (99.999%) availability that allows for only 5.26 minutes of data center downtime annually requires redundant electrical, UPS, mechanical, and generator systems. Duplication of power and cooling sources, cabling, network ports, and storage, however, both doubles the cost of the data center infrastructure and the recurring monthly cost of energy. Without new means to reduce the amount of space, cooling, and power while maintaining high data availability, the classic data center architecture is not sustainable.
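The downtime budget implied by an availability target follows directly from the percentage itself. A minimal Python sketch of the arithmetic (the function name and the 365.25-day year are illustrative assumptions, not part of the TIA-942 standard):

    # Annual downtime permitted by an availability ratio (illustrative helper).
    def allowed_downtime_minutes(availability, minutes_per_year=365.25 * 24 * 60):
        """Return the minutes of downtime per year allowed at a given availability."""
        return (1.0 - availability) * minutes_per_year

    for level in (0.99, 0.999, 0.9999, 0.99999):
        print(f"{level:.3%} availability allows {allowed_downtime_minutes(level):8.2f} minutes/year")

    # 99.999% availability works out to roughly 5.26 minutes per year,
    # matching the "five-nines" figure cited above.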
Figure 1. The ANSI/TIA-942 standard functional area connectivity.

As shown in Figure 1, the TIA-942 standard defines the main functional areas and interconnecting cable plant for the data center. Horizontal distribution is typically subfloor for older raised-floor data centers or ceiling rack drop for newer facilities. The definition of primary functional areas is meant to rationalize the cable plant and equipment placement so that space is used more efficiently and ongoing maintenance and troubleshooting can be minimized. As part of the mainframe legacy, many older data centers are victims of indiscriminate cable runs, often strung reactively in response to an immediate need. The subfloors of older data centers can be clogged with abandoned bus and tag cables, which are simply too long and too tangled to remove. This impedes airflow and makes it difficult to accommodate new cable requirements.

Note that the overview in Figure 1 does not depict the additional data center infrastructure required for UPS systems (primarily battery rooms), cooling plant, humidifiers, backup generators, fire suppression equipment, and other facilities support systems. Although the support infrastructure represents a significant part of the data center investment, it is often over-provisioned for the actual operational power and cooling requirements of IT equipment. Even though it may be done in anticipation of future growth, over-provisioning is now a luxury that few data centers can afford. Properly sizing the computer room air conditioning (CRAC) to the proven cooling requirement is one of the first steps in getting data center power costs under control.

Figure 2. The support infrastructure adds substantial cost and energy overhead to the data center.

The diagram in Figure 2 shows the basic functional areas for IT processing supplemented by the key data center support systems required for high-availability data access. Each unit of powered equipment has a multiplier effect on total energy draw. First, each data center element consumes electricity according to its specific load requirements, typically on a 7x24 basis. Second, each unit dissipates heat as a natural by-product of its operation, and heat removal and cooling requires additional energy draw in the form of the computer room air conditioning system. The CRAC system itself generates heat, which also requires cooling. Depending on the design, the CRAC system may require auxiliary equipment such as cooling towers, pumps, and so on, which draw additional power. Because electronic equipment is sensitive to ambient humidity, each element also places an additional load on the humidity control system. And finally, each element requires UPS support for continuous operation in the event of a power failure. Even in standby mode, the UPS draws power for monitoring controls, charging batteries, and flywheel operation.
Air conditioning and air flow systems typically represent about 37% of a data center's power bill. Although these systems are essential for IT operations, they are often over-provisioned in older data centers, and the original air flow strategy may not work efficiently for rack-mount open systems infrastructure. For an operational data center, however, retrofitting or redesigning air conditioning and flow during production may not be feasible.

For large data centers in particular, the steady accumulation of more servers, network infrastructure, and storage elements and their accompanying impact on space, cooling, and energy capabilities highlights the shortcomings of conventional data center design. Additional space simply may not be available, the air flow may be inadequate for sufficient cooling, and utility-supplied power may already be at its maximum. And yet the escalating requirements for more applications, more data storage, faster performance, and higher availability continue unabated. Resolving this contradiction between supply and demand requires much closer attention to both the IT infrastructure and the data center architecture as elements of a common ecosystem.

As long as energy was relatively inexpensive, companies tended to simply buy additional floor space and cooling to deal with increasing IT processing demands. Little attention was paid to the efficiency of electrical distribution systems or the IT equipment they serviced. With energy now at a premium, maximizing utilization of available power by increasing energy efficiency is essential.

Industry organizations have developed new metrics for calculating the energy efficiency of data centers and providing guidance for data center design and operations. The Uptime Institute, for example, has formulated a Site Infrastructure Energy Efficiency Ratio (SI-EER) to analyze the relationship between total power supplied to the data center and the power that is supplied specifically to operate IT equipment. The total facilities power input divided by the IT equipment power draw highlights the energy losses due to power conversion, heating/cooling, inefficient hardware, and other contributors. A SI-EER of 2 would indicate that for every 2 watts of energy input at the data center meter, only 1 watt drives IT equipment. By the Uptime Institute's own member surveys, a SI-EER of 2.5 is not uncommon.

Likewise, The Green Grid, a global consortium of IT companies and professionals seeking to improve energy efficiency in data centers and business computing ecosystems, has proposed a Data Center Infrastructure Efficiency (DCiE) ratio that divides the IT equipment power draw by the total data center facility power. This is essentially the reciprocal of SI-EER, yielding a fractional ratio between the facilities power supplied and the actual power draw for IT processing. With DCiE or SI-EER, however, it is not possible to achieve a 1:1 ratio that would enable every watt supplied to the data center to be productively used for IT processing. Cooling, air flow, humidity control, fire suppression, power distribution losses, backup power, lighting, and other factors inevitably consume power. These supporting elements, however, can be managed so that productive utilization of facilities power is increased and IT processing itself is made more efficient via new technologies and better product design.

Although SI-EER and DCiE are useful tools for a top-down analysis of data center efficiency, it is difficult to support these high-level metrics with real substantiating data. It is not sufficient, for example, to simply use the manufacturer's stated power figures for specific equipment, especially since manufacturer power ratings are often based on projected peak usage and not normal operations. In addition, stated ratings cannot account for hidden inefficiencies (for example, failure to use blanking panels in 19" racks) that periodically increase the overall power draw depending on ambient conditions. The alternative is to meter major data center components to establish baselines of operational power consumption. Although it may be feasible to design in metering for a new data center deployment, it is more difficult for existing environments. The ideal solution is for facilities and IT equipment to have embedded power metering capability that can be solicited via network management frameworks.
High-level SI-EER and DCiE metrics focus on how efficiently the data center delivers power to IT equipment. Unfortunately, this does not provide information on the energy efficiency or productivity of the IT equipment itself. Suppose there were two data centers with equivalent IT productivity: the one drawing 50 megawatts of power to drive 25 megawatts of IT equipment would have the same DCiE as a data center drawing 10 megawatts to drive 5 megawatts of IT equipment. The IT equipment energy efficiency delta could be due to a number of different technology choices, including server virtualization, more efficient power supplies and hardware design, data deduplication, tiered storage, storage virtualization, or other elements. The practical usefulness of high-level metrics is therefore dependent on underlying opportunities to increase energy efficiency in individual products and IT systems. Having a tighter ratio between facilities power input and IT output is good, but lowering the overall input number is much better.

Data center energy efficiency has external implications as well. Currently, data centers in the US alone require the equivalent of more than 6 x 1000 megawatt power plants at a cost of approximately $3B annually. Although that represents less than 2% of US power consumption, it is still a significant and growing number. Global data center power usage is more than twice the US figure. Given that all modern commerce and information exchange is based ultimately on digitized data, the social cost in terms of energy consumption for IT processing is relatively modest. In addition, the spread of digital information and commerce has already provided environmentally friendly benefits in terms of electronic transactions for banking and finance, e-commerce for both retail and wholesale channels, remote online employment, electronic information retrieval, and other systems that have increased productivity and reduced the requirement for brick-and-mortar onsite commercial transactions.

Data center managers, however, have little opportunity to bask in the glow of external efficiencies, especially when energy costs continue to climb and energy sourcing becomes problematic. Although $3B may be a bargain for modern US society as a whole, achieving higher levels of data center efficiency is now a prerequisite for meeting the continued expansion of IT processing requirements. More applications and more data mean either more hardware and energy draw or the adoption of new data center technologies and practices that can achieve much more with far less.
What differentiates the new data center architecture from the old may not be obvious at first glance. There are, after all, still endless racks of blinking lights, cabling, network infrastructure, storage arrays, and other familiar systems, and a certain chill in the air. The differences are found in the types of technologies deployed and the real estate required to house them.

As we will see in subsequent chapters, the new data center is an increasingly virtualized environment. The static relationships between clients, applications, and data characteristic of conventional IT processing are being replaced with more flexible and mobile relationships that enable IT resources to be dynamically allocated when and where they are needed most. The enabling infrastructure in the form of virtual servers, virtual fabrics, and virtual storage has the added benefit of reducing the physical footprint of IT and its accompanying energy consumption. The new data center architecture thus reconciles the conflict between supply and demand by requiring less energy while supplying higher levels of IT productivity.
Chapter 2: Running Hot and Cold

Taking the heat

Dissipating the heat generated by IT equipment is a persistent problem for data center operations. Cooling systems alone can account for one third to one half of data center energy consumption. Over-provisioning the thermal plant to accommodate current and future requirements leads to higher operational costs. Under-provisioning the thermal plant to reduce costs can negatively impact IT equipment, increase the risk of equipment outages, and disrupt ongoing business operations. Resolving heat generation issues therefore requires a multi-pronged approach to address (1) the source of heat from IT equipment, (2) the amount and type of cooling plant infrastructure required, and (3) the efficiency of air flow around equipment on the data center floor to remove heat.

Energy, Power, and Heat

In common usage, energy is the capacity of a physical system to do work and is expressed in standardized units of joules (the work done by a force of one newton moving one meter along the line of direction of the force). Power, by contrast, is the rate at which energy is expended over time, with one watt of power equal to one joule of energy per second. The power of a 100-watt light bulb, for example, is equivalent to 100 joules of energy per second, so the energy consumed by the bulb over an hour would be 360,000 joules (100 joules per second for 3,600 seconds). Because electrical systems often consume thousands of watts, the amount of energy consumed is expressed in kilowatt hours (kWh), and in fact the kilowatt hour is the preferred unit used by power companies for billing purposes. A system that requires 10,000 watts of power would thus consume and be billed for 10 kWh of energy for each hour of operation, or 240 kWh per day, or 87,600 kWh per year. The typical American household consumes 10,656 kWh per year.
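The conversion from power draw to billed energy is simple enough to capture in a short sketch. The utility rate used here is an assumed figure for illustration only.

    JOULES_PER_KWH = 3.6e6  # 1 kWh = 1000 W x 3600 s

    def annual_kwh(power_watts: float) -> float:
        """Energy consumed over a year of continuous operation, in kWh."""
        return (power_watts / 1000.0) * 24 * 365

    def annual_cost(power_watts: float, rate_per_kwh: float) -> float:
        """Annual utility cost at a given billing rate (dollars per kWh)."""
        return annual_kwh(power_watts) * rate_per_kwh

    # The 10,000-watt example from the text, priced at an assumed $0.10/kWh.
    print(annual_kwh(10_000))         # 87600.0 kWh per year
    print(annual_cost(10_000, 0.10))  # $8,760 per year at the assumed rate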
Medium and large IT hardware products are typically in the 1000+ watt range. Fibre Channel directors, for example, range from as efficient as 1300 watts (Brocade) to more than 3000 watts (competition). A large storage array can be in the 6400-watt range. Although low-end servers may be rated at ~200 watts, higher-end enterprise servers can draw as much as 8000 watts. With the high population of servers and the requisite storage infrastructure to support them in the data center, plus the typical 2x factor for the cooling plant energy draw, it is not difficult to understand why data center power bills keep escalating. According to the Environmental Protection Agency (EPA), data centers in the US collectively consume the energy equivalent of approximately 6 million households, or about 61 billion kWh per year.

Energy consumption generates heat. While power draw is expressed in watts, heat dissipation is expressed in BTU (British Thermal Units) per hour (h). One watt is approximately 3.4 BTU/h. Because BTUs quickly add up to tens or hundreds of thousands per hour in complex systems, heat can also be expressed in therms, with one therm equal to 100,000 BTU. Your household heating bill, for example, is often listed as therms averaged per day or billing period.

Environmental Parameters

Because data centers are closed environments, ambient temperature and humidity must also be considered. ASHRAE Thermal Guidelines for Data Processing Environments provides best practices for maintaining proper ambient conditions for operating IT equipment within data centers. Data centers typically run fairly cool at about 68 degrees Fahrenheit and 50% relative humidity. While legacy mainframe systems did require considerable cooling to remain within operational norms, open systems IT equipment is less demanding. Consequently, there has been a more recent trend to run data centers at higher ambient temperatures, sometimes disturbingly referred to as "Speedo" mode data center operation. Although ASHRAE's guidelines present fairly broad allowable ranges of operation (50 to 90 degrees Fahrenheit, 20 to 80% relative humidity), the recommended ranges are still somewhat narrow (68 to 77 degrees Fahrenheit, 40 to 55% relative humidity).
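A companion sketch converts an equipment power draw into the approximate heat load the cooling plant must remove, using the 3.4 BTU/h-per-watt figure above. The rack contents are hypothetical, drawn loosely from the wattage examples in this section.

    BTU_PER_HOUR_PER_WATT = 3.4   # approximate conversion used in the text
    BTU_PER_THERM = 100_000

    def heat_load_btu_per_hour(power_watts: float) -> float:
        """Approximate heat dissipated by equipment drawing the given power."""
        return power_watts * BTU_PER_HOUR_PER_WATT

    # Hypothetical rack: one 6400 W storage array plus a 1300 W director.
    rack_watts = 6400 + 1300
    btu_h = heat_load_btu_per_hour(rack_watts)
    print(btu_h)                       # ~26,180 BTU/h the cooling plant must remove
    print(btu_h * 24 / BTU_PER_THERM)  # ~6.3 therms per day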
Rationalizing IT Equipment Distribution

Servers and network equipment are typically configured in standard 19" (wide) racks, and rack enclosures, in turn, are arranged for accessibility for cabling and servicing. Increasingly, however, the floor plan for data center equipment distribution must also accommodate air flow for equipment cooling. This requires that individual units be mounted in a rack for consistent air flow direction (all exhaust to the rear or all exhaust to the front) and that the rows of racks be arranged to exhaust into a common space, called a hot aisle/cold aisle plan, as shown in Figure 3.

Figure 3. Hot aisle/cold aisle equipment floor plan.

A hot aisle/cold aisle floor plan provides greater cooling efficiency by directing cold-to-hot air flow for each equipment row into a common aisle. Each cold aisle feeds cool air to two equipment rows, while each hot aisle collects exhaust from two equipment rows, thus enabling maximum benefit from the hot/cold circulation infrastructure. Even greater efficiency is achieved by deploying equipment with variable-speed fans.
Figure 4. Variable speed fans enable more efficient distribution of cooling.

Variable speed fans increase or decrease their spin rate in response to changes in equipment temperature. As shown in Figure 4, cold air flow into equipment racks with constant speed fans favors the hardware mounted in the lower equipment slots, nearer to the cold air feed. Equipment mounted in the upper slots is heated by its own power draw as well as by the heat exhaust from the lower tiers. Use of variable speed fans, by contrast, enables each unit to selectively apply cooling as needed, with more even utilization of cooling throughout the equipment rack.

Research done by Michael Patterson and Annabelle Pratt of Intel leverages the hot aisle/cold aisle floor plan approach to create a metric for measuring energy consumption of IT equipment. By convention, the energy consumption of a unit of IT hardware can be measured physically via metering equipment or approximated via the manufacturer's stated power rating (in watts or BTUs). As shown in Figure 5, Patterson and Pratt incorporate both the energy draw of the equipment mounted within a rack and the associated hot aisle/cold aisle real estate required to cool the entire rack. This "work cell" unit thus provides a more accurate description of what is actually required to power and cool IT equipment and, supposing the equipment (for example, servers) is uniform across a row, provides a useful multiplier for calculating total energy consumption of an entire row of mounted hardware.
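The work cell idea lends itself to a simple calculation. The sketch below assumes metered per-rack power and a uniform row; the cooling overhead factor is a placeholder assumption for illustration, not a value from Patterson and Pratt.

    def work_cell_kw(rack_it_kw: float, cooling_overhead_factor: float) -> float:
        """Power attributable to one work cell: the rack's IT draw plus the
        assumed share of cooling for its slice of hot/cold aisle."""
        return rack_it_kw * (1.0 + cooling_overhead_factor)

    def row_energy_kwh_per_day(rack_it_kw: float, racks_in_row: int,
                               cooling_overhead_factor: float) -> float:
        """With uniform racks, the work cell becomes a simple multiplier for the row."""
        return work_cell_kw(rack_it_kw, cooling_overhead_factor) * racks_in_row * 24

    # Hypothetical row: ten 8 kW racks, cooling assumed to add 50% on top of IT draw.
    print(row_energy_kwh_per_day(8.0, 10, 0.5))  # 2880 kWh per day for the row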
Figure 5. The concept of the work cell incorporates both equipment power draw and requisite cooling.

When energy was plentiful and cheap, it was often easy to overlook the basic best practices for data center hardware deployment and the simple remedies to correct inefficient air flow. Blanking plates, for example, are used to cover unused rack or cabinet slots and thus enforce more efficient airflow within an individual rack. Blanking plates, however, are often ignored, especially when equipment is frequently moved or upgraded. Likewise, it is not uncommon to find decommissioned equipment still racked up (and sometimes actually powered on). Racked but unused equipment can disrupt air flow within a cabinet and trap heat generated by active hardware. In raised floor data centers, decommissioned cabling can disrupt cold air circulation, and unsealed cable cutouts can result in a continuous and fruitless loss of cooling. Because the cooling plant itself represents such a significant share of data center energy use, even seemingly minor issues can quickly add up to major inefficiencies and higher energy bills.
Economizers

Traditionally, data center cooling has been provided by large air conditioning systems (computer room air conditioning, or CRAC) that used CFC (chlorofluorocarbon) or HCFC (hydrochlorofluorocarbon) refrigerants. Since both CFCs and HCFCs are ozone depleting, current systems use ozone-friendly refrigerants to minimize broader environmental impact. Conventional CRAC systems, however, consume significant amounts of energy and may account for nearly half of a data center power bill. In addition, these systems are typically over-provisioned to accommodate data center growth and consequently incur a higher operational expense than is justified for the required cooling capacity.

For new data centers in temperate or colder latitudes, economizers can provide part or all of the cooling requirement. Economizer technology dates to the mid-1800s but has seen a revival in response to rising energy costs. As shown in Figure 6, an economizer (in this case, a dry-side economizer) is essentially a heat exchanger that leverages cooler outside ambient air temperature to cool the equipment racks.

Figure 6. An economizer uses the lower ambient temperature of outside air to provide cooling.

Use of outside air has its inherent problems. Data center equipment is sensitive to particulates that can build up on circuit boards and contribute to heating issues. An economizer may therefore incorporate particulate filters to scrub the external air before the air flow enters the data center. In addition, external air may be too humid or too dry for data center use. Integrated humidifiers and dehumidifiers can condition the air flow to meet operational specifications for data center use. As stated above, ASHRAE recommends 40 to 55% relative humidity.
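Conceptually, a dry-side economizer controller decides when outside air can replace mechanical cooling. The following sketch illustrates that decision with hypothetical setpoints loosely derived from the ASHRAE recommended ranges quoted above; a production controller would be considerably more sophisticated.

    # Hypothetical setpoints for illustration; real controllers modulate dampers
    # continuously and account for dew point, filtration, and mixing.
    SUPPLY_TEMP_MAX_F = 68.0
    HUMIDITY_MIN, HUMIDITY_MAX = 40.0, 55.0

    def use_economizer(outside_temp_f: float, outside_rh: float) -> bool:
        """Open the economizer damper only when outside air is cool enough and
        within the humidity band; otherwise fall back to CRAC cooling."""
        cool_enough = outside_temp_f <= SUPPLY_TEMP_MAX_F
        humidity_ok = HUMIDITY_MIN <= outside_rh <= HUMIDITY_MAX
        return cool_enough and humidity_ok

    print(use_economizer(55.0, 48.0))  # True: free cooling
    print(use_economizer(85.0, 30.0))  # False: revert to mechanical cooling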
Dry-side economizers depend on the external air supply temperature being sufficiently lower than the data center itself, and this may fluctuate seasonally. Wet-side economizers thus include cooling towers as part of the design to further condition the air supply for data center use. Cooling towers present their own complications, especially in more arid geographies where water resources are expensive and scarce. Ideally, economizers should leverage recyclable resources as much as possible to accomplish the task of cooling while reducing any collateral environmental impact.

Monitoring the Data Center Environment

Because vendor wattage and BTU specifications may assume maximum load conditions, using data sheet specifications or equipment label declarations does not provide an accurate basis for calculating equipment power draw or heat dissipation. An objective multi-point monitoring system for measuring heat and humidity throughout the data center is really the only means to observe and proactively respond to changes in the environment.

A number of monitoring options are available today. For example, some vendors are incorporating temperature probes into their equipment design to provide continuous reporting of heat levels via management software. Some solutions provide rack-mountable systems that include both temperature and humidity probes and monitoring through a Web interface. Fujitsu offers a fiber optic system that leverages the effect of temperature on light propagation to provide a multi-point probe using a single fiber optic cable strung throughout equipment racks. Accuracy is reported to be within a half degree Celsius and within 1 meter of the measuring point. In addition, new monitoring software products can render a three-dimensional view of temperature distribution across the entire data center, analogous to an infrared photo of a heat source.

Although monitoring systems add cost to data center design, they are invaluable diagnostic tools for fine-tuning airflow and equipment placement to maximize cooling and keep power and cooling costs to a minimum. Many monitoring systems can be retrofitted to existing data center plants so that even older sites can leverage new technologies.
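As a simple illustration of multi-point monitoring, the sketch below checks a set of probe readings against the ASHRAE recommended envelope and flags out-of-band locations. The sensor names and values are hypothetical, and no particular vendor's interface is implied.

    from typing import NamedTuple, List

    class Reading(NamedTuple):
        location: str       # e.g. "row3-rack07-top" (hypothetical naming)
        temp_f: float
        rel_humidity: float

    # ASHRAE recommended band quoted earlier in this chapter.
    TEMP_BAND = (68.0, 77.0)
    RH_BAND = (40.0, 55.0)

    def out_of_band(readings: List[Reading]) -> List[Reading]:
        """Return probe points that fall outside the recommended envelope."""
        return [r for r in readings
                if not (TEMP_BAND[0] <= r.temp_f <= TEMP_BAND[1]
                        and RH_BAND[0] <= r.rel_humidity <= RH_BAND[1])]

    samples = [Reading("row3-rack07-top", 82.5, 44.0),     # hot spot near upper slots
               Reading("row3-rack07-bottom", 70.1, 46.0)]
    for r in out_of_band(samples):
        print(f"ALERT {r.location}: {r.temp_f} F / {r.rel_humidity}% RH")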
Chapter 3: Doing More with Less

Leveraging virtualization and blade server technologies

Of the three primary components of an IT data center infrastructure—servers, storage, and network—servers are by far the most populous and have the highest energy impact. Servers represent approximately half of the IT equipment energy cost and about a quarter of the total data center power bill. Server technology has therefore been a prime candidate for regulation via EPA Energy Star and other market-driven initiatives and has undergone a transformation in both hardware and software. Server virtualization and blade server design, for example, are distinct technologies fulfilling different goals, but together they have a multiplying effect on server processing performance and energy efficiency. In addition, multi-core processors and multi-processor motherboards have dramatically increased server processing power in a more compact footprint.

VMs Reborn

The concept of virtual machines dates back to mainframe days. To maximize the benefit of mainframe processing, a single physical system was logically partitioned into independent virtual machines. Each VM ran its own operating system and applications in isolation, although the processor and peripherals could be shared. In today's usage, VMs typically run on open systems servers, and although direct-connect storage is possible, shared storage on a SAN or NAS is the norm.

Unlike previous mainframe implementations, today's virtualization software can support dozens of VMs on a single physical server. Typically, 10 or fewer VM instances are run per physical platform, although more powerful server platforms can support 20 or more VMs.
The benefits of server virtualization are as obvious as the potential risks. Running 10 VMs on a single server platform eliminates the need for 9 additional servers with their associated cost, components, and accompanying power draw and heat dissipation. For data centers with hundreds or thousands of servers, virtualization offers an immediate solution for server sprawl and ever-increasing costs. Like any virtualization strategy, however, the logical separation of VMs must be maintained and access to server memory and external peripherals negotiated to prevent conflicts or errors. VMs on a single platform are hosted by a hypervisor layer, which runs either directly (Type 1 or native) on the server hardware or on top of (Type 2 or hosted) the conventional operating system already running on the server hardware.

Figure 7. A native or Type 1 hypervisor.

In a native Type 1 virtualization implementation, the hypervisor runs directly on the server hardware, as shown in Figure 7. This type of hypervisor must therefore support all CPU, memory, network, and storage I/O traffic directly without the assistance of an underlying operating system. The hypervisor is consequently written to a specific CPU architecture (for open systems, typically an Intel x86 design) and associated I/O. Clearly, one of the benefits of native hypervisors is that overall latency can be minimized as individual VMs perform the normal functions required by their applications. With the hypervisor directly managing hardware resources, it is also less vulnerable over time to code changes or updates that might be required if an underlying OS were used.
Figure 8. A hosted or Type 2 hypervisor.

As shown in Figure 8, a hosted or Type 2 server virtualization solution is installed on top of the host operating system. The advantage of this approach is that virtualization can be implemented on existing servers to more fully leverage existing processing power and support more applications in the same footprint. Given that the host OS and hypervisor layer insert additional steps between the VMs and the lower-level hardware, this hosted implementation incurs more latency than native hypervisors. On the other hand, hosted hypervisors can readily support applications with moderate performance requirements and still achieve the objective of consolidating compute resources.

In both native and hosted hypervisor environments, the hypervisor oversees the creation and activity of its VMs to ensure that each VM has its requisite resources and does not interfere with the activity of other VMs. Without the proper management of shared memory tables by the hypervisor, for example, one VM instance could easily crash another. The hypervisor must also manage the software traps created to intercept hardware calls made by the guest OS and provide the appropriate emulation of normal OS hardware access and I/O. Because the hypervisor is now managing multiple virtual computers, secure access to the hypervisor itself must be maintained. Efforts to standardize server virtualization management for stable and secure operation are being led by the Distributed Management Task Force (DMTF) through its Virtualization Management Initiative (VMAN) and through collaborative efforts by virtualization vendors and partner companies.
Server virtualization software is now available for a variety of CPUs, hardware platforms, and operating systems. Adoption for mid-tier, moderate performance applications has been enabled by the availability of economical dual-core CPUs and commodity rack-mount servers. High-performance requirements can be met with multi-CPU platforms optimized for shared processing. Although server virtualization has steadily been gaining ground in large data centers, there has been some reluctance to commit the most mission-critical applications to VM implementations. Consequently, mid-tier applications have been first in line, and as these deployments become more pervasive and proven, mission-critical applications will follow.

In addition to providing a viable means to consolidate server hardware and reduce energy costs, server virtualization enables a degree of mobility unachievable via conventional server management. Because the virtual machine is now detached from the underlying physical processing, memory, and I/O hardware, it is possible to migrate a virtual machine from one hardware platform to another non-disruptively. If, for example, an application's performance is beginning to exceed the capabilities of its shared physical host, it can be migrated onto a less busy host or one that supports faster CPUs and I/O. This application agility, which initially was just an unintended by-product of migrating virtual machines, has become one of the compelling reasons to invest in a virtual server solution. With ever-changing business, workload, and application priorities, the ability to quickly shift processing resources where they are most needed is a competitive business advantage.

As discussed in more detail below, virtual machine mobility creates new opportunities for automating application distribution within the virtual server pool and implementing policy-based procedures to enforce priority handling of select applications over others. Communication between the virtualization manager and the fabric via APIs, for example, enables proactive response to potential traffic congestion or changes in the state of the network infrastructure. This further simplifies management of application resources and ensures higher availability.
Blade Server Architecture

Server consolidation in the new data center can also be achieved by deploying blade server frames. The successful development of blade server architecture has been dependent on the steady increase in CPU processing power and on solving basic problems around shared power, cooling, memory, network, storage, and I/O resources. Although blade servers are commonly associated with server virtualization, these are distinct technologies that have a multiplying benefit when combined.

Blade server design strips away all but the most essential dedicated components from the motherboard and provides shared assets as either auxiliary special function blades or as part of the blade chassis hardware. Consequently, the power consumption of each blade server is dramatically reduced while power supply, fans, and other elements are shared with greater efficiency. A standard data center rack, for example, can accommodate 42 1U conventional rack-mount servers, but 128 or more blade servers in the same space. A single rack of blade servers can therefore house the equivalent of 3 racks of conventional servers; and although the cooling requirement for a fully populated blade server rack may be greater than for a conventional server rack, it is still less than that of the equivalent 3 racks that would otherwise be required.

As shown in Figure 9, a blade server architecture offloads all components that can be supplied by the chassis or by supporting specialized blades. The blade server itself is reduced to one or more CPUs and requisite auxiliary logic. The degree of component offload and availability of specialized blades varies from vendor to vendor, but the net result is essentially the same. More processing power can now be packed into a much smaller space, and compute resources can be managed more efficiently.

Figure 9. A blade server architecture centralizes shared resources while reducing individual blade server elements.
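The consolidation arithmetic discussed in this chapter can be summarized in a few lines. The figures below follow the chapter's examples (42 1U servers or 128 blades per rack, roughly 10 VMs per platform) and will vary by vendor and workload.

    CONVENTIONAL_1U_PER_RACK = 42     # standard rack of 1U servers (chapter example)
    BLADES_PER_RACK = 128             # fully populated blade rack (vendor dependent)

    def rack_consolidation_ratio() -> float:
        """How many conventional racks one blade rack replaces."""
        return BLADES_PER_RACK / CONVENTIONAL_1U_PER_RACK

    def virtual_servers_per_blade_rack(vms_per_blade: int) -> int:
        """Total OS instances when virtualization is layered on top of blades."""
        return BLADES_PER_RACK * vms_per_blade

    print(round(rack_consolidation_ratio()))   # ~3 conventional racks per blade rack
    print(virtual_servers_per_blade_rack(10))  # 1280 virtual servers in one rack
    print(virtual_servers_per_blade_rack(10) // CONVENTIONAL_1U_PER_RACK)  # ~30 racks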
By significantly reducing the number of discrete components per processing unit, the blade server architecture achieves higher efficiencies in manufacturing, reduced consumption of resources, streamlined design, and reduced overall costs of provisioning and administration. The unique value-add of each vendor's offering may leverage hot-swap capability, variable-speed fans, variable-speed CPUs, shared memory blades, and consolidated network access. Brocade has long worked with the major blade server manufacturers to provide optimized Access Gateway and switch blades to centralize storage network capability, and the specific features of these products will be discussed in the next section.

Although consolidation ratios of 3:1 are impressive, much higher server consolidation is achieved when blade servers are combined with server virtualization software. A fully populated data center rack of 128 blade servers, for example, could support 10 or more virtual machines per blade for a total of 1280 virtual servers. That would be the equivalent of 30 racks (at 42 servers per rack) of conventional 1U rack-mount servers running one OS instance per server. From an energy savings standpoint, that represents the elimination of over 1000 power supplies, fan units, network adapters, and other elements that contribute to higher data center power bills and cooling load.

As a 2009 survey by blade.org shows, adoption of blade server technology has been increasing in both large data centers and small/medium business (SMB) environments. Slightly less than half of the data center respondents and approximately a third of SMB operations have already implemented blade servers, and over a third in both categories have deployment plans in place. With limited data center real estate and increasing power costs squeezing data center budgets, the combination of blade servers and server virtualization is fairly easy to justify.

Brocade Server Virtualization Solutions

Whether on standalone servers or blade server frames, implementing server virtualization has both upstream (client) and downstream (storage) impact in the data center. Because Brocade offers a full spectrum of products spanning LAN, WAN, and SAN, it can help ensure that a server virtualization deployment proactively addresses the new requirements of both client and storage access. The value of a server virtualization solution is thus amplified when combined with Brocade's network technology.
To maximize the benefits of network connectivity in a virtualized server environment, Brocade has worked with the major server virtualization solutions and managers to deliver high performance, high availability, security, energy efficiency, and streamlined management end to end. The following Brocade solutions can enhance a server virtualization deployment and help eliminate potential bottlenecks:

Brocade High-Performance 8 Gbps HBAs

In a conventional server, a host bus adapter (HBA) provides storage access for a single operating system and its applications. In a virtual server configuration, the HBA may be supporting 10 to 20 OS instances, each running its own application. High performance is therefore essential for enabling multiple virtual machines to share HBA ports without congestion. The Brocade 815 (single port) and 825 (dual port, shown in Figure 10) HBAs provide 8 Gbps bandwidth and 500,000 I/Os per second (IOPS) performance per port to ensure maximum throughput for shared virtualized connectivity. Brocade N_Port Trunking enables the 825 to deliver an unprecedented 16 Gbps of bandwidth (3200 MBps) and one million IOPS performance. This exceptional performance helps ensure that server virtualization configurations can expand over time to accommodate additional virtual machines without impacting the continuous operation of existing applications.

Figure 10. The Brocade 825 8 Gbps HBA supports N_Port Trunking for an aggregate 16 Gbps bandwidth and one million IOPS.
The Brocade 815 and 825 HBAs are further optimized for server virtualization connectivity by supporting advanced intelligent services that enable end-to-end visibility and management. As discussed below, Brocade virtual machine SAN boot, N_Port ID Virtualization (NPIV), and integrated Quality of Service (QoS) provide powerful tools for simplifying virtual machine deployments and providing proactive alerts directly to server virtualization managers.

Brocade 8 Gbps Switch and Director Ports

In virtual server environments, the need for speed does not end at the network or storage port. Because more traffic is now traversing fewer physical links, building high-performance network infrastructures is a prerequisite for maintaining non-disruptive, high-performance virtual machine traffic flows. Brocade's support of 8 Gbps ports on both switch and enterprise-class platforms enables customers to build high-performance, non-blocking storage fabrics that can scale from small VM configurations to enterprise-class data center deployments. Designing high-performance fabrics ensures that applications running on virtual machines are not exposed to bandwidth issues and can accommodate the high-volume traffic patterns required for data backup and other applications.

Brocade Virtual Machine SAN Boot

For both standalone physical servers and blade server environments, the ability to boot from the storage network greatly simplifies virtual machine deployment and migration of VM instances from one server to another. As shown in Figure 11, SAN boot centralizes management of boot images and eliminates the need for local storage on each physical server platform. When virtual machines are migrated from one hardware platform to another, the boot images can be readily accessed across the SAN via Brocade HBAs.
Figure 11. SAN boot centralizes management of boot images and facilitates migration of virtual machines between hosts.

Brocade 815 and 825 HBAs provide the ability to automatically retrieve boot LUN parameters from a centralized fabric-based registry. This eliminates the error-prone manual host-based configuration scheme required by other HBA vendors. Brocade's SAN boot and boot LUN discovery facilitate migration of virtual machines from host to host, remove the need for local storage, and improve reliability and performance.

Brocade N_Port ID Virtualization for Workload Optimization

In a virtual server environment, the individual virtual machine instances are unaware of physical ports, since the underlying hardware has been abstracted by the hypervisor. This creates potential problems for identifying traffic flows from virtual machines through shared physical ports. NPIV is an industry standard that enables multiple Fibre Channel addresses to share a single physical Fibre Channel port. In a server virtualization environment, NPIV allows each virtual machine instance to have a unique World Wide Name (WWN), or virtual HBA port. This in turn provides a level of granularity for identifying each VM attached to the fabric for end-to-end monitoring, accounting, and configuration. Because the WWN is now bound to an individual virtual machine, the WWN follows the VM when it is migrated to another platform. In addition, NPIV creates the linkage required for advanced services such as QoS, security, and zoning, as discussed in the next section.
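The essential point of NPIV in a virtualized environment is that the virtual WWN is a property of the VM rather than of the physical port it happens to use. The sketch below models that binding with hypothetical names and WWN values; it is illustrative only and does not represent any hypervisor or Fabric OS API.

    from dataclasses import dataclass

    @dataclass
    class VirtualMachine:
        name: str
        wwn: str          # virtual port WWN assigned via NPIV; stays with the VM
        host: str         # physical server whose HBA port it currently shares

    def migrate(vm: VirtualMachine, new_host: str) -> None:
        """Move a VM to another physical host; its NPIV WWN travels with it,
        so fabric-side zoning and QoS bindings keyed on the WWN still apply."""
        vm.host = new_host

    # Hypothetical VM and WWN for illustration.
    vm = VirtualMachine("sql-01", "10:00:00:00:00:00:ab:01", "host-a")
    migrate(vm, "host-b")
    print(vm)   # same WWN, new host -- monitoring and zoning follow the VM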
Configuring Single Initiator/Target Zoning

Brocade has been a pioneer in fabric-based zoning to segregate fabric traffic and restrict visibility of storage resources to only authorized hosts. As a recognized best practice for server-to-storage configuration, NPIV and single initiator/target zoning ensure that individual virtual machines have access only to their designated storage assets. This feature minimizes configuration errors during VM migration and extends the management visibility of fabric connections to specific virtual machines.

Brocade End-to-End Quality of Service

The combination of NPIV and zoning functionality on Brocade HBAs and switches provides the foundation for higher-level fabric services, including end-to-end QoS. Because the traffic flows from each virtual machine can be identified by virtual WWN and segregated via zoning, each can be assigned a delivery priority (low, medium, or high) that is enforced fabric-wide from the host connection to the storage port, as shown in Figure 12.

Figure 12. Brocade's QoS enforces traffic prioritization from the server HBA to the storage port across the fabric. The figure's callouts note that Virtual Channels technology enables QoS at the ASIC level in the HBA, that the default QoS priority is Medium, and that frame-level interleaving of outbound data maximizes initiator link utilization.

While some applications running on virtual machines are logical candidates for QoS prioritization (for example, SQL Server), Brocade's Top Talkers management feature can help identify which VM applications may require priority treatment. Because Brocade end-to-end QoS is ultimately tied to the virtual machine's virtualized WWN address, the QoS assignment follows the VM if it is migrated from one hardware platform
to another. This feature ensures that applications enjoy non-disruptive data access despite adds, moves, and changes to the downstream environment and enables administrators to more easily fulfill client service-level agreements (SLAs).

Brocade LAN and SAN Security

Most companies are now subject to government regulations that mandate the protection and security of customer data transactions. Planning a virtualization deployment must therefore also account for basic security mechanisms for both client and storage access. Brocade offers a broad spectrum of security solutions, including LAN and WAN-based technologies and storage-specific SAN security features. For example, Brocade SecureIron products, shown in Figure 13, provide firewall traffic management and LAN security to safeguard access from clients to virtual hosts on the IP network.

Figure 13. Brocade SecureIron switches provide firewall traffic management and LAN security for client access to virtual server clusters.

Brocade SAN security features include authentication via access control lists (ACLs) and role-based access control (RBAC) as well as security mechanisms for authenticating connectivity of switch ports and devices to fabrics. In addition, the Brocade Encryption Switch, shown in Figure 14, and the FS8-18 Encryption Blade for the Brocade DCX Backbone platform provide high-performance (96 Gbps) data encryption for data-at-rest. Brocade's security environment thus protects data-in-flight from client to virtual host as well as data written to disk across the SAN.

Figure 14. The Brocade Encryption Switch provides high-performance data encryption to safeguard data written to disk or tape.
Brocade Access Gateway for Blade Frames

Server virtualization software can be installed on conventional server platforms or blade server frames. Blade server form factors offer the highest density for consolidating IT processing in the data center and leverage shared resources across the backplane. To optimize storage access from blade server frames, Brocade has partnered with blade server providers to create high-performance, high-availability Access Gateway blades for Fibre Channel connectivity to the SAN. Brocade Access Gateway technology leverages NPIV to simplify virtual machine addressing and F_Port Trunking for high utilization and automatic link failover. By integrating SAN connectivity into a virtualized blade server chassis, Brocade helps to streamline deployment and simplify management while reducing overall costs.

The Energy-Efficient Brocade DCX Backbone Platform for Consolidation

With 4x the performance and over 10x the energy efficiency of other SAN directors, the Brocade DCX delivers the high performance required for virtual server implementation and can accommodate growth in VM environments in a compact footprint. The Brocade DCX supports 384 ports of 8 Gbps for a total of 3 Tbps of chassis bandwidth. Ultra-high-speed inter-chassis links (ICLs) allow further expansion of the SAN core for scaling to meet the requirements of very large server virtualization deployments. The Brocade DCX is also designed to non-disruptively integrate Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB) for future virtual server connectivity. The Brocade DCX is also available in a 192-port configuration (as the Brocade DCX-4S) to support medium VM configurations, while providing the same high availability, performance, and advanced SAN services.

The Brocade DCX's Adaptive Networking services for QoS, ingress rate limiting, congestion detection, and management ensure that traffic streams from virtual machines are proactively managed throughout the fabric and accommodate the varying requirements of upper-layer business applications. Adaptive Networking services provide greater agility in managing application workloads as they migrate between physical servers.
Enhanced and Secure Client Access with Brocade LAN Solutions

Brocade offers a full line of sophisticated LAN switches and routers for Ethernet and IP traffic, from Layer 2/3 to Layer 4–7 application switching. This product suite is the natural complement to Brocade's robust SAN products and enables customers to build full-featured and secure networks end to end. As with the Brocade DCX architecture for SANs, Brocade BigIron RX, shown in Figure 15, and FastIron SuperX switches incorporate best-in-class functionality and low power consumption to deliver high-performance core switching for data center LAN backbones.

Figure 15. Brocade BigIron RX platforms offer high-performance Layer 2/3 switching in three compact, energy-efficient form factors.

Brocade edge switches with Power over Ethernet (PoE) support enable customers to integrate a wide variety of IP business applications, including voice over IP (VoIP), wireless access points, and security monitoring. Brocade SecureIron switches bring advanced security protection for client access into virtualized server clusters, while Brocade ServerIron switches provide Layer 4–7 application switching and load balancing. Brocade LAN solutions provide up to 10 Gbps throughput per port and so can accommodate the higher traffic loads typical of virtual machine environments.

Brocade Industry Standard SMI-S Monitoring

Virtual server deployments dramatically increase the number of data flows and requisite bandwidth per physical server or blade server. Because server virtualization platforms can support dynamic migration of application workloads between physical servers, complex traffic patterns are created and unexpected congestion can occur. This complicates server management and can impact performance and availability. Brocade can proactively address these issues by integrating communication between Brocade intelligent fabric services with VM