HRG Assessment
Cisco UCS 5100 and IBM BladeCenter H


Introduction

Today’s new workloads require high performance, high availability, and rapid scaling. Two key concerns
are transactional latency and the bottlenecking associated with failover, migration, and the movement of a single Virtual
Machine (VM) or group of VMs from one processor core to another. IT professionals need to prepare for significant
change resulting from the rolling adoption of Virtualization, Cloud Computing, and data management software like
Hadoop, Memcached, and NoSQL.

The requirement to scale systems to meet customer demand, retain and grow existing customer relationships, stay
competitive, and support new products is driving IT change. Continual pressure to reduce operational and capital
expenditures while improving IT quality of service is behind many current business and technology changes.

Cisco’s Unified Computing System (UCS) 5100 Blade System and IBM’s BladeCenter H are both easy to configure, easy
to scale, easy to manage, integrated blade server systems. However, there are important differences between the two
solutions. This assessment is based on publicly available information, including marketing and sales materials, videos,
podcasts, and vendor briefings.

Summary
          •    The Cisco approach is an “appliance approach” where the management software (UCS Manager) and
               hardware (UCS 6120, 6140, and 6248 Fabric Interconnects) are sold as an integrated and inseparable
               package. Cisco embeds their UCS Manager software in the Cisco UCS Fabric Interconnect switches.
               Currently, the only way to get UCS Manager is to buy one of these switches. Cisco UCS Manager is a
               device manager that only manages Cisco Blades, Rack Mount Servers, and other UCS components.
          •    Cisco does not sell system-level management and monitoring software, relying instead on BMC, EMC,
               CA, IBM, and others to fill this void.
          •    Cisco UCS Manager does not offer Predictive Failure Analysis. If a correctly configured Cisco UCS
               B-series blade server fails, the failure will initiate a VMotion to move workloads off of the failed
               blade to a healthy blade. From everything we have read, however, it appears that this action is only
               taken after a failure has occurred.
          •    Cisco provides a highly customizable set of XML APIs so that developers and system level software, tools,
               and utilities providers can integrate their offerings with Cisco UCS Manager. BMC software can work

Copyright © 2011 Harvard Research Group, Inc.
Harvard Research Group, Inc.



               through Cisco UCS Manager to stand up, provision, and manage UCS Blade and Rack mount servers as
               well as Virtual Machines on those servers.
          •    Cisco Blades and Rack mount servers are Intel only. Cisco UCS Manager only manages Cisco Blade and
               Rack mount servers.
          •    The Cisco UCS solution only offers converged Fibre Channel over Ethernet (FCoE) within the Cisco UCS
               5100 chassis and not native Fibre Channel.
          •    In addition to Intel-based BladeCenter blades, IBM also offers Power6- and Power7-based blades, all of
               which are plug-and-play integrated within the BladeCenter H chassis.
          •    IBM Systems Director is a standalone software product, a rich device manager, a performance monitor,
               and a Predictive Failure Analysis (PFA) and alerting tool. IBM Systems Director works with all IBM
               servers and non-IBM x86 servers to stand up, provision, and manage servers, as well as the virtual
               machines on those servers.
          •    Due to the architectural differences between IBM’s BladeCenter and Cisco’s UCS Blade System, it takes
               longer to evacuate VMs from a failing UCS Blade or Rack Mount server to a healthy server than it would
               take with IBM BladeCenter H. This is because all blade-to-blade and chassis-to-chassis traffic within a
               Cisco UCS management domain is routed through the Fabric Interconnect top-of-rack switch.
          •    IBM offers system level software, tools, and utilities including IBM Systems Director and IBM Tivoli
               offerings for system monitoring, management, automation, management of Converged Data Center
               infrastructure, and integration across heterogeneous environments.
          •    IBM Blades and Rack mount servers are Intel, AMD, Power, and System z and all can be managed by IBM
               Systems Director and IBM Tivoli software offerings. In addition, IBM Systems Director can integrate and
               manage non-IBM x86 servers including Cisco UCS.


Cisco UCS
The Cisco UCS approach is an “appliance approach” where the software (UCS Manager) and hardware (UCS 6120,
6140, and 6248 Fabric Interconnects) are sold as an integrated and inseparable package. For a pure play Cisco IT shop,
this level of abstraction significantly reduces the time spent in software and hardware deliberation, selection, installation,
and implementation. Another benefit for Cisco customers is that they can take a simplified Lego-like approach to scale
out. If the IT infrastructure is heterogeneous, and based on open standards, customers should consider how a system
like the UCS Blade System will be integrated and managed as part of a heterogeneous data center environment.

The Cisco UCS Blade System represents a fixed physical architecture that could limit the ultimate flexibility of this
solution. This Cisco UCS 5108 chassis based solution requires 2 identical Top of Rack Fabric Interconnect switches
(the 20 port 6120, the 40 port 6140, or the 32 port 6248 Cisco Switches) in order to provide redundancy and a
reasonable degree of availability at the Layer 2 Fabric Interconnect Switch level.

The Cisco UCS 5100 Series Blade Server Chassis is 6 rack units (6RU) high. A 42U rack can fit 2 Fabric Interconnect
switches and 6 Cisco UCS Blade chassis. A chassis can accommodate 2 Fabric Extenders, the 2104 with 4 uplink ports,
or the 2208 with 8 uplink ports, and up to 8 half-width, or 4 full-width Cisco UCS B-Series Blade Servers. Cisco servers
are currently only available with Intel processors and UCS Manager only manages Cisco UCS certified hardware.

Each of the 2 FEX pass-through switches (they do not route traffic) in each Cisco UCS Blade chassis is connected to
the south side server ports on one of the Fabric Interconnects. Two Fabric Extenders (FEX) are required by Cisco for
availability and fail over purposes. Each FEX has either 4 or 8 north bound 10 Gb uplink ports depending on the
specific model. The same number of uplink ports on each FEX must be connected to the south side ports on each of





the Fabric Interconnect (FI) switches such that FEX A will connect only to FI A and FEX B will only connect to FI B
in order to preserve system availability, fail over capability and redundancy throughout the Fabric path.

The Cisco recommended UCS Blade system configuration calls for each FEX in a 5108 chassis to be connected to only
one Fabric Interconnect switch. For the UCS Blade system to deliver its maximum throughput, all uplink ports on each
FEX must be connected to one or the other Fabric Interconnect. A configuration with 2 UCS 6248 UP Fabric
Interconnects set up in an active/standby configuration and with 2 UCS 2208 FEX installed in each UCS 5108 chassis
could only support 4 5108 chassis or 32 half width B-Series blades in a maximum bandwidth configuration. Each 6248
Fabric Interconnect only has 32 ports available for connection to FEX if the optional expansion module ports are used
as uplink ports.

If a customer requires maximum scale-out capacity, an option is to use only one of the available uplinks on each
FEX. In such a configuration, 2 UCS 6248 Fabric Interconnects running in an active/standby configuration could
support up to 32 UCS 5108 chassis or 256 half-width B-Series blades. This configuration requires the purchase of an
optional expansion module to handle northbound traffic from the 6248. If a customer has a configuration like the one
just described, they run the risk of unacceptable levels of transactional latency, oversubscription of ports, and
bottlenecking on the south side of the 6248.
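The port arithmetic behind these two configurations can be checked with a short calculation. This is only a sketch: the port and blade counts are taken from the figures cited above, not from Cisco data sheets.

```python
# Sketch of the UCS scale-out arithmetic described above; port counts come
# from this assessment's text, not from Cisco specifications.

FI_SERVER_PORTS = 32       # 6248 ports left for FEX links when the expansion module carries uplinks
FEX_UPLINKS = 8            # uplink ports on a UCS 2208 FEX
BLADES_PER_CHASSIS = 8     # half-width B-Series blades per 5108 chassis

def chassis_supported(uplinks_used_per_fex: int) -> int:
    """Each chassis consumes `uplinks_used_per_fex` server ports on each Fabric Interconnect."""
    return FI_SERVER_PORTS // uplinks_used_per_fex

# Maximum-bandwidth configuration: all 8 uplinks on each FEX are cabled.
print(chassis_supported(FEX_UPLINKS),
      chassis_supported(FEX_UPLINKS) * BLADES_PER_CHASSIS)   # 4 chassis, 32 blades

# Maximum scale-out: a single uplink per FEX, trading bandwidth for chassis count.
print(chassis_supported(1),
      chassis_supported(1) * BLADES_PER_CHASSIS)             # 32 chassis, 256 blades
```

The trade-off is stark: an 8x gain in chassis count comes at an 8x reduction in uplink bandwidth per chassis, which is the oversubscription risk described above.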

In this scenario, each FEX connects to 8 UCS B-Series blade servers through the 5108 chassis mid-plane. The
10 Gb converged FCoE fabric is brought to each FEX through cables plugged into 8 of the south-side ports on each
of the UCS Fabric Interconnects, extending the Cisco 10 Gb FCoE converged network fabric, as well as the
UCS Manager bi-directional system-level communications, from the 6248 switch to the in-chassis 2208 FEX and
from there across the 5108 chassis mid-plane to the B-Series blades.

Each of the Fabric Extenders within a UCS 5108 chassis connects to a different top of rack Fabric Interconnect to
ensure that there are redundant 10 Gb Cisco converged fabric paths in the event that either one of the FEX or one of
the Fabric Interconnect switches fails. The 2 Fabric Interconnect switches are run in active/standby configuration.
Only one Fabric Interconnect is actively switching traffic while the other is waiting in standby mode to take over in the
event of a failure. Customers may experience a slight delay during a failover while the standby switch takes over
from the failed switch. Additional delay may be incurred because an actual failure has to occur before failover can
begin, which means any in-flight transactions may be delayed. This type of architecture, while advantageous
for availability, is not optimal when it comes to the potential for physical and virtual port oversubscription of
northbound resources by the VMs residing within the 5108 chassis.

Another consideration is that, with only one Fabric Interconnect in active mode, running multiple high-transaction
workloads on the B-Series servers in the chassis risks increased latency at the Fabric Interconnect. It is specifically
for this reason that the Cisco UCS 5108 based solution is not a good fit for many high-performance and
high-transaction-rate workloads, nor is it appropriate for VM-based high-transaction-rate workloads: in such a
scenario, a customer runs the very real risk of oversubscription and increased transactional latency occurring
simultaneously.

Customers should be aware that there is no native Fibre Channel connectivity available within the Cisco UCS 5108
chassis or within the rack that contains the chassis. Native Fibre Channel connectivity is available on the north side
of the Fabric Interconnect switch if an appropriate expansion module is purchased. If a customer is currently running
native Fibre Channel for SAN connectivity from individual rack mount or blade servers, they will need to migrate to a
converged 10 Gb FCoE (Fibre Channel over Ethernet) Cisco fabric. For customers doing net new installations, this
lack of true Fibre Channel connectivity may not pose a problem.







End Host Mode

UCS Fabric Interconnects initially “power up” into what Cisco calls “End Host Mode,” which does
not use Spanning Tree Protocol (STP) to make forwarding decisions. Running in End Host Mode, the Fabric Interconnect
does not consume CPU resources doing STP calculations or sending and receiving Bridge Protocol Data Units (BPDUs). If
a customer needs a standard Layer 2 STP switch, a software configuration change can be made, after which the switch
must be rebooted before it can be used in standard Layer 2 switch mode running the Spanning Tree Protocol.

Even when running in End Host Mode, the Fabric Interconnect will use MAC (Media Access Control) address learning
and behave as a Layer 2 switch for local traffic. For one blade to communicate with any other blade in the same chassis,
or with a blade in another chassis in the same physical rack, that traffic has to travel from the first blade up to the
top-of-rack FI switch and from that switch back down to the second blade. In this manner, the Cisco UCS
architecture introduces additional latency during blade-to-blade and chassis-to-chassis messaging.

Layer 2 switching behavior for all UCS systems below the Fabric Interconnect will result in increased latency when
compared to other blade systems that allow direct blade-to-blade communications within the same chassis or within the
same rack but on different chassis. This increased latency is of particular interest when moving VMs from one physical
server to another physical server as in the case of the evacuation of VMs from a failing server to a healthy server. In
this instance, any increased latency will have an impact on the level of service provided by VMs being migrated while
handling workloads and in-flight transactions.

The current version of Cisco UCS Manager, when configured to run in End Host Mode, allows the use of FCoE to
connect a storage array directly to the Fabric Interconnect. In this configuration, End Host Mode is used to pin a
north-side Fabric Interconnect port to a VSAN (Virtual Storage Area Network) based storage array. With this
configuration, however, it is not possible to perform either LUN (Logical Unit Number) masking or normal storage
zoning, as one could if the Fabric Interconnect switch were run in normal Layer 2 switch mode with STP enabled.
Configuring the Fabric Interconnect as a Layer 2 switch running STP could exacerbate transaction latency by placing
additional load on the switch CPU, which would be required to generate, send, and receive BPDUs.

One additional bandwidth related concern derives from the observation that the Cisco UCS architecture funnels and
aggregates all I/O transactional and management traffic through a single Top of Rack Fabric Interconnect switch. This
should be of interest to those customers planning on running a highly virtualized, memory intensive workload
environment with the number of VMs dynamically fluctuating as work load and capacity requirements ramp based on
business requirements. Our recommendation to customers considering such a solution is to rigorously model
workload, capacity, transaction, and bandwidth requirements in order to avoid any Quality of Service or Service
Level Agreement surprises.

Uniformity

The uniformity of the available Cisco UCS B series blades is both a benefit and a limitation. The fact that you can
integrate Cisco C series rack mount servers into the same UCS management domain as Cisco B Series Blade Servers is a
real benefit for those workloads that require the performance and capacity of a Rack Mounted server. Regarding
uniformity, the Cisco UCS system is a Cisco only solution. This means that UCS Manager can only manage Cisco
Switches, Blades, Rack Mount servers, CNAs, and in chassis fabric extenders.

According to Cisco, UCS provides predictable levels of latency, and therefore predictable performance, regardless of
the physical location of a workload or blade server, as long as they are in the same rack, because it takes a predictable
amount of time (latency) for traffic to pass through the Fabric Interconnect. Customers are advised to consider whether
predictable latency is appropriate for their workloads or whether what they really need is reduced (low) latency.






B-Series M2 Blade servers

Currently Cisco offers the following blade servers for use in their Cisco UCS 5108 blade chassis.

          •    Cisco UCS B440 M2 – Intel® Xeon® based blade server
          •    Cisco UCS B250 M2 – Intel® Xeon® based blade server
          •    Cisco UCS B230 M2 – Intel® Xeon® based blade server
          •    Cisco UCS B200 M2 – Intel® Xeon® based blade server

Cisco UCS B-Series Blade Servers
    Model #        Processor      Max Cores   Max GHz   Sockets   DIMMs   Max Mem (GB)   Blades per 5108 chassis
    UCS B440 M2    Intel® Xeon    10          2.4       4         32      512            4
    UCS B250 M2    Intel® Xeon    6           3.46      2         48      384            4
    UCS B230 M2    Intel® Xeon    10          2.4       2         32      512            8
    UCS B200 M2    Intel® Xeon    6           3.46      2         12      192            8

Cisco UCS Manager

Cisco UCS Manager’s embedded device management software manages the software and hardware components of the
Cisco Unified Computing System™ across multiple chassis and virtual machines through a Java-based GUI, a CLI
(command-line interface), or an XML (Extensible Markup Language) Application Programming Interface (API).
Service Profiles in the UCS Manager application can be used to set up and configure stateless Intel Xeon based Cisco
Blades, Rack mount servers, and virtual machines. Service Profile settings can be ‘moved’ with a virtual machine when
it is moved using VMware’s VMotion in the case of a server failure or when reallocating capacity to satisfy changing
workload requirements.

The XML APIs for the UCS Manager application can be used by third-party management tools. Using these APIs, data
center management software from BMC, CA, EMC, and IBM can provision and decommission servers based on
demand. Currently, only BMC and EMC use these APIs to this extent. However, IBM Tivoli will soon have this
capability (currently in beta testing), allowing Cisco UCS compute pods, or islands of computing, to be integrated into a
broader, more heterogeneous, Converged Data Center environment.
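As an illustration of the XML API style described above, a third-party tool might authenticate and pull blade inventory roughly as follows. This is a hedged sketch: the aaaLogin and configResolveClass method names follow Cisco's published UCS XML API, but the endpoint host, credentials, and attribute handling here are illustrative placeholders.

```python
# Sketch of a third-party tool talking to the UCS Manager XML API.
# Method names (aaaLogin, configResolveClass) follow Cisco's published
# XML API; the host, credentials, and attributes shown are placeholders.
import urllib.request
import xml.etree.ElementTree as ET

UCS_URL = "https://fi-a.example.com/nuova"  # XML endpoint on the Fabric Interconnect (placeholder host)

def login_request(user: str, password: str) -> str:
    """aaaLogin returns a session cookie used by all subsequent calls."""
    return f'<aaaLogin inName="{user}" inPassword="{password}"/>'

def resolve_class_request(cookie: str, class_id: str) -> str:
    """configResolveClass returns every managed object of the given class."""
    return (f'<configResolveClass cookie="{cookie}" '
            f'classId="{class_id}" inHierarchical="false"/>')

def call(xml_body: str) -> ET.Element:
    """POST an XML request to UCS Manager and parse the XML response."""
    req = urllib.request.Request(UCS_URL, data=xml_body.encode(),
                                 headers={"Content-Type": "application/xml"})
    with urllib.request.urlopen(req) as resp:
        return ET.fromstring(resp.read())

# Usage against a live Fabric Interconnect (not executed here):
#   session = call(login_request("admin", "secret"))
#   cookie = session.get("outCookie")
#   blades = call(resolve_class_request(cookie, "computeBlade"))
#   for b in blades.iter("computeBlade"):
#       print(b.get("dn"), b.get("operState"))
```

Because the API is plain XML over HTTPS, any management product that can build and parse XML can integrate at this level, which is how BMC and EMC achieve their tighter integration.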

Cisco UCS Manager Service Profiles are created by server, network, and storage administrators and stored on the UCS
Fabric Interconnect in an object based data store. Cisco UCS Manager discovers UCS devices that are added, moved,
or removed from the UCS system. This information, added to UCS Manager’s inventory (a lightweight CMDB), is
saved on the Fabric Interconnect switch. UCS Manager uses this information when deploying Service Profiles to newly
discovered resources. When a Service Profile is deployed, UCS Manager configures the server, adapters, fabric
extenders, fabric interconnects, NICs, HBAs, LAN, and SAN switches. Service Profiles can also be used to enable
Virtual Network Link (VN-Link) capabilities for VN-Link supported hypervisors.

Cisco UCS supports the VMware ESX, ESXi, Microsoft Hyper-V, and KVM hypervisors. Cisco’s implementation of
VMware virtualization uses a UCS-specific proprietary version of ESXi. This lets ESX and ESXi run directly on the
UCS system hardware, without additional software, providing hypervisor functionality to host guest operating systems
such as Windows or Linux on the physical server.

Cisco UCS Manager enables Fibre Channel over Ethernet in the UCS internal fabric and preserves traditional Ethernet
and Fibre Channel connectivity to LAN and SAN environments north of the Fabric Interconnect. However, there is
no true Fibre Channel connectivity south of the Fabric Interconnect, and none within the UCS Blade system chassis.






Cisco UCS Manager is a device, or element, management application that is only available from Cisco with the purchase
of a Cisco UCS Fabric Interconnect. UCS Manager handles hardware provisioning, configuration, and management, but
only for UCS certified components such as Cisco B series blades and Cisco C series rack mount servers. It manages
these servers as stateless devices, configuring them through XML-based, UCS-specific Service Profiles.

Cisco UCS Manager ecosystem partners include BMC, CA, Compuware, Dynamic Ops, EMC, HP, IBM, Microsoft,
SolarWinds, Symantec, VMware, and Zenoss. Those partners offering the tightest level of integration with Cisco’s UCS
environment are EMC, BMC, and soon IBM.


IBM BladeCenter H

IBM BladeCenter H is an open blade architecture focused on flexible processor, memory, and I/O configuration and
open to collaboration. This open architecture enables non-IBM companies to develop and build compatible blades,
networking and storage switches, and blade adapter cards for inclusion in IBM BladeCenter by following the Blade
Open Specification.

The IBM BladeCenter H chassis holds 14 blade servers, integrating Power6, Power7, and Intel blades within the
same chassis as a single compute resource. IBM currently offers 5 types of blade chassis. Four IBM
BladeCenter H chassis comprising 56 blades with integrated Layer 2 switching will fit into 36U of rack space in an
industry-standard 42U rack, leaving additional room for storage.
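The rack-density figures above follow from simple arithmetic (a sketch using the 9U chassis height and 14-blade capacity cited in this assessment):

```python
# Rack-density arithmetic for IBM BladeCenter H as described above.
CHASSIS_HEIGHT_U = 9      # BladeCenter H occupies 9U of rack space
BLADES_PER_CHASSIS = 14
RACK_U = 42               # industry-standard rack height

chassis = 4
used_u = chassis * CHASSIS_HEIGHT_U      # rack units consumed by the chassis
blades = chassis * BLADES_PER_CHASSIS    # total blade servers in the rack
spare_u = RACK_U - used_u                # rack units left for storage or other gear
print(used_u, blades, spare_u)           # 36 56 6
```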

IBM offers the following servers for use in their IBM BladeCenter H chassis.

          •    IBM BladeCenter HX5 – Intel® Xeon® based blade server
          •    IBM BladeCenter HS22V – Intel® Xeon® based blade server
          •    IBM BladeCenter HS22 – Intel® Xeon® based blade server
          •    IBM BladeCenter HS12 – Intel® Xeon® based blade server
          •    IBM BladeCenter PS704 Express – IBM POWER7™ based blade server
          •    IBM BladeCenter PS703 Express – IBM POWER7™ based blade server
          •    IBM BladeCenter PS702 Express– IBM POWER7™ based blade server
          •    IBM BladeCenter PS701 Express– IBM POWER7™ based blade server
          •    IBM BladeCenter PS700 Express– IBM POWER7™ based blade server
          •    IBM BladeCenter JS12 Express– IBM POWER6™ based blade server
          •    IBM BladeCenter QS22 – IBM PowerXCell™ 8i based blade server

IBM BladeCenter offers either integrated or pass-through switching in the chassis, providing customers more flexibility
when making architectural decisions. IBM blades and the BladeCenter H chassis support the VMware ESXi, Microsoft
Hyper-V, open-source KVM-based Red Hat RHEV-H, and PowerVM virtualization hypervisors, enabling data
center consolidation and high-density compute configurations.








IBM BladeCenter Blade Servers
    Model #          Processor        Max Cores   Max GHz   Sockets   DIMMs   Max Mem (GB)   Blades per BladeCenter H chassis
HX5              Intel® Xeon      10          2.67      4         16      256            7
HX5              Intel® Xeon      10          2.67      2         16      256            14
HX5 & MAX5       Intel® Xeon      10          2.67      2         56      640            7
HS22V            Intel® Xeon      4           3.6       2         18      288            14
HS22             Intel® Xeon      6           3.6       2         12      192            14
HS12             Intel® Xeon      4           2.83      1         6       24             14
PS704 Express    Power7®          32          2.4       4         32      256            14
PS703 Express    Power7®          16          2.4       2         16      128            14
PS702 Express    Power7®          16          3         2         32      256            14
PS701 Express    Power7®          8           3         1         16      128            14
PS700 Express    Power7®          4           3         1         8       64             14
JS12 Express     Power6™          2           3.8       1         8       64             14
QS22             PowerXCell™ 8i   9           3.2       2         2       32             14

With the double-wide IBM BladeCenter HX5/MAX5 blade, complete databases can be held in memory, accelerating
system performance and enhancing throughput by avoiding the latency associated with more traditional page-swapping
requirements. The HX5/MAX5 blade delivers 640 GB of available memory. Customers can populate an entire IBM
BladeCenter H chassis with 7 of these blade servers, giving 4.48 TB of memory in a 9U footprint. The level of
virtualization and in-memory data management supported by the HX5/MAX5 conserves power, saves money on
licensing costs, and reduces environmental conditioning (HVAC and power) and space requirements.
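The chassis-level memory figure cited above can be verified directly (a sketch using the per-blade capacity from the table):

```python
# Chassis-level memory arithmetic for the HX5/MAX5 configuration above.
MEM_PER_BLADE_GB = 640     # HX5 with MAX5 memory-expansion unit
BLADES_PER_CHASSIS = 7     # double-wide blades per 9U BladeCenter H chassis

total_gb = MEM_PER_BLADE_GB * BLADES_PER_CHASSIS   # memory across the chassis
print(total_gb, total_gb / 1000)                   # 4480 GB = 4.48 TB (decimal)
```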

IBM Systems Director

IBM Systems Director is not limited to IBM Blades and can manage other vendors’ blade, rack mount, and tower
servers. IBM Systems Director discovers and provides basic management of network devices from Brocade, BNT
(recently acquired by IBM), Qlogic, Cisco, and others. IBM Systems Director also tightly integrates with VMware’s
vCenter to provide management capabilities for VMs.

IBM Systems Director manages heterogeneous IT environments including Microsoft Windows®, Intel® Linux®,
Power Linux, AIX®, i5/OS®, IBM i, and System z Linux environments across System p, System i®, System x, System
z, BladeCenter, and OpenPower®, as well as x86-based non-IBM hardware.

IBM Systems Director integrates tightly with Tivoli and can report results to other tools including CA, BMC, and EMC.
With IBM Systems Director, customers can pre-configure servers, remotely re-purpose systems, and set up automatic
updates (including firmware updates) and recoveries. Systems Director provides either a browser based or command
line interface for visualizing managed systems, how they are interrelated, and displaying system status. Systems Director
common tasks include: discovery, inventory, configuration, system health, monitoring, updates, event notification and
automation across managed systems.

          •    IBM Systems Director VMControl manages virtual environments across multiple virtualization
               technologies and hardware platforms, providing visibility and control. VMControl Express is a free
               Systems Director plug-in.
          •    IBM Systems Director’s Predictive Failure Analysis feature monitors system health and generates
               alerts before failure occurs. Alerts trigger preventative action by system administrators or through
               automation to avoid a service outage. Monitored components include CPUs, memory, hard disk drives,
               voltage regulator modules, power supply units, temperature sensors, and fans. IBM passes PFA alerts to




               VMware via Systems Director so vCenter can move the VMs off the server and maintenance can be
               performed with no downtime.
          •    IBM Systems Director Active Energy Manager™ monitors and manages the actual energy usage across
               systems and facilities within the data center in order to maintain service availability within specified energy
               use parameters.
          •    IBM Systems Director Network Control provides integration of server, storage, and network
               management for virtualization environments across platforms. It discovers, manages, monitors, and
               configures network devices and enables a unified view of network management tasks.

IBM Systems Director and IBM Tivoli manage multiple operating systems, virtualization technologies (VMware, KVM,
Hyper-V, PowerVM, and/or zVM), IBM platforms, and non-IBM platforms including servers, desktop computers,
workstations, notebook computers, storage subsystems, and SNMP devices.

IBM Tivoli®

IBM Tivoli® software provides systems security, storage, monitoring and configuration capabilities. Tivoli incorporates
open systems standards and automation.

Expect an IBM Tivoli UCS monitoring and management agent to be announced toward the end of 2012. This
technology is currently part of an Open Beta program. The UCS agent establishes a link with the Cisco UCS Manager
gaining access to system-level information collected by UCS. Through this agent, Tivoli monitors UCS performance,
health, and capacity trending, providing a view of application performance. Tivoli will monitor, aggregate into a
business service view, and visualize hypervisor, application, physical hardware, operating system, storage, and network
performance for Cisco UCS systems. The information collected by the Systems Director UCS agent, such as
hardware metrics and hardware load events, will be funneled to Tivoli Netcool/OMNIbus, which will act as a pipe
carrying that information to higher-level Tivoli products for monitoring, analysis, and
management. Tivoli software will be able to manage UCS Manager’s health as an application and, if there is a server
failure, identify which VMs and applications are impacted and then initiate a VMotion to move those VMs to a healthy
system. IBM currently has access to event information from the UCS Manager through the available UCS APIs. In
addition, IBM has very tight integration throughout the entire VMware stack.

IBM BladeCenter Open Fabric Manager (BOFM)

According to IBM, BladeCenter Open Fabric Manager can manage the I/O and network interconnects for up to 256
BladeCenter chassis and up to 3,584 blade servers. BladeCenter Open Fabric Manager, installed on IBM’s Advanced
Management Module (AMM), lets customers pre-configure their LAN and SAN connections so that I/O expansion
card connections are made automatically, assigning or reassigning Ethernet MAC addresses and Fibre Channel WWN
addresses whenever a blade is brought online or repurposed.

With BladeCenter Open Fabric Manager installed, the AMM can assign boot device addresses and VLAN tags to
individual devices. Later these assignments can be changed to provide for dynamic provisioning, resource
reconfiguration, and blade replacement in the case of a failover. IBM BladeCenter Open Fabric Manager supports open
standards and industry interoperability across multiple I/O fabrics, including Ethernet, iSCSI, Fibre Channel over
Ethernet (FCoE), Fibre Channel, InfiniBand, and Serial Attached SCSI (SAS).

Each BladeCenter H chassis comes with one hot-swappable AMM that is used to configure and manage all installed
BladeCenter components. BladeCenter H supports the installation of a second, redundant AMM, which is recommended





for enhanced system availability. Only one Advanced Management Module can control the BladeCenter system at a
time. The AMM provides notification when the primary and standby Advanced Management Modules are established,
and when a failover automatically occurs.

The Advanced Management Module communicates with each blade server to support features such as blade server
power-on requests, error and event reporting as well as controlling Ethernet and serial port connections for remote
management access.

Customers today are planning moves to more integrated systems built on a converged fabric that supports NAS,
iSCSI, FCoE, and automated virtualization. Using IBM Systems Director with Open Fabric Manager, customers can
integrate BladeCenter, Cluster 1350, and iDataPlex for scale-up, scale-out, or a combination depending on workload
requirements.

IBM Virtual Fabric

IBM® Virtual Fabric for IBM BladeCenter is based on the IBM BladeCenter H with 10Gb Converged Enhanced
Ethernet switch modules in the chassis and the Emulex or Broadcom Virtual Fabric Adapters in each blade server.
This configuration delivers up to 20Gb of bandwidth to each blade. Each Virtual Fabric Adapter can split bandwidth
between as many as eight virtual NICs (vNICs).

With IBM System x and BladeCenter Virtual Fabric solutions from BNT (IBM), Brocade, and Cisco, the same network
hardware can act as Ethernet, iSCSI, FCoE, or Fibre Channel, and bandwidth can be allocated in increments from
100Mb to 10Gb.
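A minimal sketch of that allocation model, dividing one 10Gb adapter port among up to eight vNICs in 100Mb increments. The function and names are hypothetical, for illustration only; the real allocation is done in the switch and adapter firmware.

```python
# Illustrative sketch (hypothetical names): carving one 10Gb Virtual Fabric
# Adapter port into up to eight vNICs, each a multiple of 100Mb.
# Working in Mb integers avoids floating-point granularity checks.

PORT_MB = 10_000      # one 10Gb adapter port, in Mb
INCREMENT_MB = 100    # 100Mb allocation granularity
MAX_VNICS = 8

def allocate_vnics(requests_mb):
    """Validate a vNIC bandwidth plan for one adapter port and return
    the per-vNIC allocation."""
    if len(requests_mb) > MAX_VNICS:
        raise ValueError("at most 8 vNICs per adapter port")
    if any(r <= 0 or r % INCREMENT_MB for r in requests_mb):
        raise ValueError("each vNIC needs a positive multiple of 100Mb")
    if sum(requests_mb) > PORT_MB:
        raise ValueError("requests exceed the 10Gb port capacity")
    return {f"vnic{i}": r for i, r in enumerate(requests_mb)}

# e.g. Ethernet data, FCoE storage, iSCSI, and a management vNIC
plan = allocate_vnics([2_000, 2_000, 500, 100])
assert sum(plan.values()) == 4_600   # remaining 5.4Gb stays unallocated
```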

          •    Pre-configure over 11,000 LAN and SAN connections once for each blade server.
          •    Manage up to 256 chassis and up to 3,584 blade servers from a single Advanced Management Module.
          •    Virtualize any 10Gb Ethernet, iSCSI, or FCoE switch using Virtual Fabric.
          •    Intelligent Failure Monitoring enables automatic failover between physical or virtual ports in the event of
               an uplink port failure.


Conclusion
Cisco UCS is a good fit for general business workloads, but is not a good fit for many of today's mission-critical
workloads where reduced transactional latency is a requirement. Cisco's Intel-centric approach to the blade market,
while highly simplified and easy to understand, is not a particularly good fit for many of today's edge-of-the-web,
latency-sensitive Big Data applications.

IBM's BladeCenter H is well suited for high transaction rate workloads requiring low latency, as well as for many
emerging edge-of-the-web Big Data applications. The flexibility of the IBM BladeCenter H solution makes it an
attractive alternative to the fixed-architecture approach of some manufacturers. This increased level of flexibility makes
BladeCenter H a good fit for a broad range of compute requirements, including new workloads such as message-passing
HPC, Grid, risk management, and next-generation Big Data applications in today's highly competitive global
markets.




Copyright © 2011 Harvard Research Group, Inc                                                                       page 9
Harvard Research Group, Inc.




                                               Harvard Research Group
                                                  Harvard, MA 01451 USA

                                                     Tel. (978) 456-3939
                                                     Tel. (978) 925-5187

                                                 e-mail: hrg@hrgresearch.com

                                                http://www.hrgresearch.com




                                                                               BLW03026-USEN-00




Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 

IBM and others to fill this void.
• Cisco UCS Manager does not offer Predictive Failure Analysis.
However, if a Cisco UCS B-Series blade server is set up correctly and fails, the failure will initiate a VMotion that moves workloads off the failed blade to a healthy blade. From everything we have read, however, this action is taken only after a failure has occurred.
• Cisco provides a highly customizable set of XML APIs so that developers and providers of system-level software, tools, and utilities can integrate their offerings with Cisco UCS Manager. BMC software can work through Cisco UCS Manager to stand up, provision, and manage UCS blade and rack mount servers, as well as the virtual machines on those servers.
• Cisco blades and rack mount servers are Intel only. Cisco UCS Manager only manages Cisco blade and rack mount servers.
• The Cisco UCS solution offers only converged Fibre Channel over Ethernet (FCoE) within the Cisco UCS 5100 chassis, not native Fibre Channel.
• In addition to Intel based BladeCenter blades, IBM also offers Power6 and Power7 based blades, all of which are plug-and-play integrated within the BladeCenter H chassis.
• IBM Systems Director is a standalone software product, a rich device manager, a performance monitor, and a Predictive Failure Analysis (PFA) and alerting tool. IBM Systems Director works with all IBM servers and non-IBM x86 servers to stand up, provision, and manage servers, as well as the virtual machines on those servers.
• Due to the architectural differences between IBM’s BladeCenter and Cisco’s UCS Blade System, it takes longer to evacuate VMs from a failing UCS blade or rack mount server to a healthy server than it would with IBM BladeCenter H. This is because all blade-to-blade and chassis-to-chassis traffic within a Cisco UCS management domain is routed through the Fabric Interconnect top-of-rack switch.
• IBM offers system-level software, tools, and utilities, including IBM Systems Director and IBM Tivoli offerings, for system monitoring, management, automation, management of converged data center infrastructure, and integration across heterogeneous environments.
• IBM blades and rack mount servers span Intel, AMD, Power, and System z, and all can be managed by IBM Systems Director and IBM Tivoli software offerings. In addition, IBM Systems Director can integrate and manage non-IBM x86 servers, including Cisco UCS.

Copyright © 2011 Harvard Research Group, Inc.
Cisco UCS

The Cisco UCS approach is an “appliance approach” where the software (UCS Manager) and hardware (UCS 6120, 6140, and 6248 Fabric Interconnects) are sold as an integrated and inseparable package. For a pure-play Cisco IT shop, this level of abstraction significantly reduces the time spent in software and hardware deliberation, selection, installation, and implementation. Another benefit for Cisco customers is that they can take a simplified Lego-like approach to scale out. If the IT infrastructure is heterogeneous and based on open standards, customers should consider how a system like the UCS Blade System will be integrated and managed as part of a heterogeneous data center environment.

The Cisco UCS Blade System represents a fixed physical architecture that could limit the ultimate flexibility of this solution. This Cisco UCS 5108 chassis based solution requires 2 identical top-of-rack Fabric Interconnect switches (the 20 port 6120, the 40 port 6140, or the 32 port 6248) in order to provide redundancy and a reasonable degree of availability at the Layer 2 Fabric Interconnect switch level. The Cisco UCS 5100 Series Blade Server Chassis is 6 rack units (6RU) high. A 42U rack can fit 2 Fabric Interconnect switches and 6 Cisco UCS blade chassis. A chassis can accommodate 2 Fabric Extenders (the 2104 with 4 uplink ports, or the 2208 with 8 uplink ports) and up to 8 half-width, or 4 full-width, Cisco UCS B-Series blade servers. Cisco servers are currently only available with Intel processors, and UCS Manager only manages Cisco UCS certified hardware.

Each of the 2 FEX pass-through switches (they do not route traffic) in each Cisco UCS blade chassis is connected to the south side server ports on one of the Fabric Interconnects. Two Fabric Extenders (FEX) are required by Cisco for availability and failover purposes. Each FEX has either 4 or 8 northbound 10 Gb uplink ports, depending on the specific model. The same number of uplink ports on each FEX must be connected to the south side ports on each of the Fabric Interconnect (FI) switches, such that FEX A connects only to FI A and FEX B connects only to FI B, in order to preserve system availability, failover capability, and redundancy throughout the fabric path.

The Cisco recommended UCS Blade System configuration calls for each FEX in a 5108 chassis to be connected to only one Fabric Interconnect switch. For the UCS Blade System to deliver its maximum throughput, all uplink ports on each FEX must be connected to one or the other Fabric Interconnect. A configuration with 2 UCS 6248 UP Fabric Interconnects set up in an active/standby configuration, and with 2 UCS 2208 FEX installed in each UCS 5108 chassis, could only support 4 5108 chassis, or 32 half-width B-Series blades, in a maximum bandwidth configuration. Each 6248 Fabric Interconnect has only 32 ports available for connection to FEX if the optional expansion module ports are used as uplink ports.

If a customer requires maximum scale-out of capacity, an option is to use only one of the available uplinks on each FEX. In such a configuration, 2 UCS 6248 Fabric Interconnects running in an active/standby configuration could support up to 32 UCS 5108 chassis, or 256 half-width B-Series blades. This configuration requires the purchase of an optional expansion module to handle northbound traffic from the 6248. With a configuration like the one just described, a customer runs the risk of unacceptable levels of transactional latency, oversubscription of ports, and bottlenecking on the south side of the 6248. In this scenario, each FEX connects to 8 UCS B-Series blade servers through the 5108 chassis mid-plane.
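The scale-out arithmetic above can be checked with a short sketch. The port and blade counts are the figures quoted in this assessment; the oversubscription ratio is our own simple illustration of 8 blades sharing a FEX’s uplinks.

```python
# Sketch of the UCS 5108 scale-out arithmetic described above.
# Figures from this assessment: a 6248 FI exposes 32 server-facing ports
# (with the expansion module used for uplinks); a 2208 FEX has 8 uplinks;
# a 5108 chassis holds up to 8 half-width blades.

FI_SERVER_PORTS = 32    # south side ports per 6248 Fabric Interconnect
BLADES_PER_CHASSIS = 8  # half-width B-Series blades per 5108 chassis

def chassis_supported(uplinks_used_per_fex: int) -> int:
    """Chassis per Fabric Interconnect when each FEX uses this many uplinks."""
    return FI_SERVER_PORTS // uplinks_used_per_fex

def oversubscription(uplinks_used_per_fex: int) -> float:
    """8 blades at 10 Gb each sharing N 10 Gb FEX uplinks."""
    return BLADES_PER_CHASSIS * 10 / (uplinks_used_per_fex * 10)

# Maximum-bandwidth configuration: all 8 FEX uplinks cabled.
print(chassis_supported(8), chassis_supported(8) * BLADES_PER_CHASSIS)  # 4 32

# Maximum scale-out: a single uplink per FEX.
print(chassis_supported(1), chassis_supported(1) * BLADES_PER_CHASSIS)  # 32 256
print(oversubscription(1))  # 8.0 -- the bottlenecking risk described above
```

The 8:1 ratio in the single-uplink case is the oversubscription risk the text warns about.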
Basically, the 10 Gb converged FCoE fabric is brought to each FEX through cables plugged into 8 of the south side ports on each of the UCS Fabric Interconnects, thereby extending the Cisco 10 Gb FCoE converged network fabric, as well as the UCS Manager bi-directional system-level communications, from the 6248 switch to the in-chassis 2208 FEX, and from there via the 5108 chassis mid-plane to the B-Series blades. Each of the Fabric Extenders within a UCS 5108 chassis connects to a different top-of-rack Fabric Interconnect to ensure that there are redundant 10 Gb Cisco converged fabric paths in the event that either one of the FEX or one of the Fabric Interconnect switches fails.

The 2 Fabric Interconnect switches run in an active/standby configuration. Only one Fabric Interconnect actively switches traffic while the other waits in standby mode to take over in the event of a failure. Customers may experience a slight delay during failover while the standby switch takes over from the failed switch. Additional delay may be experienced because an actual failure has to occur before failover can begin, which means any in-flight transactions may be delayed.

This type of architecture, while advantageous for availability, is not optimal when it comes to the potential for physical and virtual port oversubscription of northbound resources by the VMs residing within the 5108 chassis. Another consideration is that, with only one Fabric Interconnect in active mode, running multiple high transaction workloads on the B-Series servers in the chassis risks increased latency at the Fabric Interconnect.
It is specifically for this reason that the Cisco UCS 5108 based solution is not a good fit for many high performance and high transaction rate workloads, nor is it appropriate for VM based high transaction rate workloads, as in such a scenario a customer could run the very real risk of oversubscription and increased transactional latency occurring simultaneously.

Customers should be aware that there is no native Fibre Channel connectivity available within the Cisco UCS 5108 chassis or within the rack that contains the chassis. Native Fibre Channel connectivity is available on the north side of the Fabric Interconnect switch if an appropriate expansion module is purchased. A customer currently running native Fibre Channel for SAN connectivity from individual rack mount or blade servers will need to migrate to a converged 10 Gb FCoE (Fibre Channel over Ethernet) Cisco fabric. For customers doing net new installations, this lack of true Fibre Channel connectivity may not pose a problem.
End Host Mode

UCS Fabric Interconnects initially power up into what Cisco calls “End Host Mode,” which does not use Spanning Tree Protocol (STP) to make forwarding decisions. Running in End Host Mode, the Fabric Interconnect does not consume CPU resources to do STP calculations, nor does it send and receive Bridge Protocol Data Units (BPDUs). If a customer needs a standard Layer 2 STP switch, a software configuration change can be made, after which the switch must be rebooted before it can be used in standard Layer 2 switch mode running the Spanning Tree Protocol. Even when running in End Host Mode, the Fabric Interconnect uses MAC (Media Access Control) address learning and behaves as a Layer 2 switch for local traffic.

For one blade to communicate with any other blade in the same chassis, or with a blade in another chassis in the same physical rack, that traffic has to be routed from the first blade up to the top-of-rack FI switch and from there to the second blade. In this manner, the Cisco UCS architecture introduces additional latency during blade-to-blade and chassis-to-chassis messaging. Layer 2 switching behavior for all UCS systems below the Fabric Interconnect results in increased latency when compared to other blade systems that allow direct blade-to-blade communications within the same chassis, or within the same rack on different chassis. This increased latency is of particular interest when moving VMs from one physical server to another, as in the case of the evacuation of VMs from a failing server to a healthy server. In this instance, any increased latency will have an impact on the level of service provided by the VMs being migrated while they handle workloads and in-flight transactions.

The current version of Cisco UCS Manager, when configured to run in End Host Mode, allows the use of FCoE to connect a storage array directly to the Fabric Interconnect.
In this configuration, Cisco End Host Mode is used to pin a north side Fabric Interconnect port to a VSAN (Virtual Storage Area Network) based storage array. With this configuration, however, it is not possible to perform either LUN (Logical Unit Number) masking or normal storage zoning, as would be possible if the Fabric Interconnect switch were run in normal Layer 2 switch mode with STP enabled. Configuring the Fabric Interconnect to run as a Layer 2 switch running STP could exacerbate transactional latency due to additional loading on the switch CPU, which would be required to generate, send, and receive BPDUs.

One additional bandwidth related concern derives from the observation that the Cisco UCS architecture funnels and aggregates all I/O, transactional, and management traffic through a single top-of-rack Fabric Interconnect switch. This should be of interest to customers planning to run a highly virtualized, memory intensive workload environment with the number of VMs fluctuating dynamically as workload and capacity requirements ramp with business requirements. Our recommendation to customers considering such a solution is to rigorously model workloads, capacity requirements, and transaction and bandwidth requirements in order to avoid any potential Quality of Service or Service Level Agreement surprises.

Uniformity

The uniformity of the available Cisco UCS B-Series blades is both a benefit and a limitation. The fact that Cisco C-Series rack mount servers can be integrated into the same UCS management domain as Cisco B-Series blade servers is a real benefit for those workloads that require the performance and capacity of a rack mount server. Regarding uniformity, the Cisco UCS system is a Cisco only solution. This means that UCS Manager can only manage Cisco switches, blades, rack mount servers, CNAs, and in-chassis fabric extenders.
According to Cisco, UCS provides predictable levels of latency, or predictable performance, regardless of the physical location of a workload or blade server, as long as they are in the same rack, because it takes a predictable amount of time (latency) to be accessed through the Fabric Interconnect. Customers are advised to consider whether predictable latency is appropriate for their workloads, or whether what they really need is reduced (low) latency.
B-Series M2 Blade Servers

Currently Cisco offers the following blade servers for use in the Cisco UCS 5108 blade chassis.

• Cisco UCS B440 M2 – Intel® Xeon® based blade server
• Cisco UCS B250 M2 – Intel® Xeon® based blade server
• Cisco UCS B230 M2 – Intel® Xeon® based blade server
• Cisco UCS B200 M2 – Intel® Xeon® based blade server

Cisco UCS B-Series Blade Servers

Model #       Processor    Max Cores  Max GHz  Sockets  DIMMs  Max Mem (GB)  Blades per 5108 chassis
UCS B440 M2   Intel® Xeon  10         2.4      4        32     512           4
UCS B250 M2   Intel® Xeon  6          3.46     2        48     384           4
UCS B230 M2   Intel® Xeon  10         2.4      2        32     512           8
UCS B200 M2   Intel® Xeon  6          3.46     2        12     192           8

Cisco UCS Manager

Cisco UCS Manager’s embedded device management software manages the software and hardware components of the Cisco Unified Computing System™ across multiple chassis and virtual machines through a Java based GUI, a CLI (command line interface), or an XML (Extensible Markup Language) Application Programming Interface (API). Service Profiles in the UCS Manager application can be used to set up and configure stateless Intel Xeon based Cisco blades, rack mount servers, and virtual machines. Service Profile settings can be “moved” with a virtual machine when it is moved using VMware’s VMotion, in the case of a server failure or when reallocating capacity to satisfy changing workload requirements.

The XML APIs for the UCS Manager application can be used by 3rd party management tools. Using these APIs, data center management software from BMC, CA, EMC, and IBM can provision and decommission servers based on demand. Currently, only BMC and EMC use these APIs to this extent. However, IBM Tivoli will soon have this capability (currently in beta testing), allowing Cisco UCS compute pods, or islands of computing, to be integrated into a broader, more heterogeneous, converged data center environment.
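As a flavor of what the XML API looks like, the sketch below builds and parses the session-login exchange. The `aaaLogin` method and `outCookie` attribute follow Cisco’s published XML API conventions, but the endpoint and the sample response are our own illustrations; nothing here contacts a real UCS Manager.

```python
# Minimal sketch of the UCS Manager XML API login exchange. No request is
# actually sent -- we only build and parse the XML documents that would be
# POSTed to http://<ucsm-address>/nuova (address is a placeholder).
import xml.etree.ElementTree as ET

def build_login_request(username: str, password: str) -> str:
    """Build the aaaLogin request body."""
    req = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(req, encoding="unicode")

def parse_login_response(body: str) -> str:
    """Extract the session cookie from an aaaLogin response."""
    cookie = ET.fromstring(body).get("outCookie")
    if not cookie:
        raise RuntimeError("login failed: " + body)
    return cookie

request = build_login_request("admin", "secret")
# Hand-written sample response, not captured from a live system:
sample_response = '<aaaLogin response="yes" outCookie="sample-session-cookie"/>'
cookie = parse_login_response(sample_response)
# Subsequent API calls (queries, service profile operations) would carry
# this cookie value in their own XML request bodies.
```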
Cisco UCS Manager Service Profiles are created by server, network, and storage administrators and stored on the UCS Fabric Interconnect in an object based data store. Cisco UCS Manager discovers UCS devices that are added to, moved within, or removed from the UCS system. This information, added to UCS Manager’s inventory (a lightweight CMDB), is saved on the Fabric Interconnect switch. UCS Manager uses this information when deploying Service Profiles to newly discovered resources. When a Service Profile is deployed, UCS Manager configures the server, adapters, fabric extenders, fabric interconnects, NICs, HBAs, and LAN and SAN switches. Service Profiles can also be used to enable Virtual Network Link (VN-Link) capabilities for VN-Link supported hypervisors.

Cisco UCS supports the VMware ESX, ESXi, Microsoft Hyper-V, and KVM hypervisors. Cisco’s implementation of VMware virtualization uses a UCS specific proprietary version of ESXi. This lets ESX and ESXi run directly on the UCS system hardware, without additional software, providing hypervisor functionality to host guest operating systems such as Windows or Linux on the physical server.

Cisco UCS Manager enables Fibre Channel over Ethernet in the UCS internal fabric and preserves traditional Ethernet and Fibre Channel connectivity to LAN and SAN environments north of the Fabric Interconnect. However, there is no true Fibre Channel connectivity south of the Fabric Interconnect, and there is no true Fibre Channel connectivity within the UCS blade system chassis.
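The stateless-server idea behind Service Profiles can be illustrated with a short sketch. The data model below is our illustration, not Cisco’s actual schema: the point is that network and storage identities live in the profile, so re-associating the profile moves the identity to a different blade.

```python
# Illustrative sketch (hypothetical data model, not Cisco's) of the
# "stateless blade plus Service Profile" concept described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceProfile:
    name: str
    mac: str          # NIC identity travels with the profile, not the blade
    wwn: str          # HBA identity travels with the profile
    boot_target: str

@dataclass
class Blade:
    slot: str
    profile: Optional[ServiceProfile] = None

def associate(profile: ServiceProfile, blade: Blade) -> None:
    """Deploying a profile gives the blade the profile's identities."""
    blade.profile = profile

profile = ServiceProfile("web-01", "00:25:B5:00:00:01",
                         "20:00:00:25:B5:00:00:01", "san-boot-lun-0")
blade_a = Blade("chassis1/slot1")
blade_b = Blade("chassis1/slot2")

associate(profile, blade_a)   # profile deployed on blade A
blade_a.profile = None        # blade A fails and is disassociated...
associate(profile, blade_b)   # ...and the same identity appears on blade B
```

Because LAN and SAN see the same MAC and WWN after the move, the workload’s network and storage view is unchanged.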
Cisco UCS Manager is a device, or element, management application that is only available from Cisco with the purchase of a Cisco UCS Fabric Interconnect. UCS Manager handles hardware provisioning, configuration, and management, but only for UCS certified components such as Cisco B-Series blades and Cisco C-Series rack mount servers. UCS Manager manages these servers as stateless devices and uses XML to configure them with UCS specific Service Profiles. Cisco UCS Manager ecosystem partners include BMC, CA, Compuware, Dynamic Ops, EMC, HP, IBM, Microsoft, SolarWinds, Symantec, VMware, and Zenoss. The partners offering the tightest level of integration with Cisco’s UCS environment are EMC, BMC, and soon IBM.

IBM BladeCenter H

IBM BladeCenter H is an open blade architecture product design, focused on flexible processor, memory, and I/O configuration and open to collaboration. This open architecture enables non-IBM companies to develop and build compatible blades, networking and storage switches, and blade adapter cards for inclusion in the IBM BladeCenter by utilizing the Blade Open Specification. The IBM BladeCenter H chassis holds 14 blade servers, integrating Power6, Power7, and Intel blades within the same chassis as a single image compute resource. IBM currently offers 5 types of blade chassis. Four IBM BladeCenter H chassis comprising 56 blades with integrated Layer 2 switching will fit into 36U of rack space in an industry standard 42U rack, leaving additional room for storage. IBM offers the following servers for use in the IBM BladeCenter H chassis.
• IBM BladeCenter HX5 – Intel® Xeon® based blade server
• IBM BladeCenter HS22V – Intel® Xeon® based blade server
• IBM BladeCenter HS22 – Intel® Xeon® based blade server
• IBM BladeCenter HS12 – Intel® Xeon® based blade server
• IBM BladeCenter PS704 Express – IBM POWER7™ based blade server
• IBM BladeCenter PS703 Express – IBM POWER7™ based blade server
• IBM BladeCenter PS702 Express – IBM POWER7™ based blade server
• IBM BladeCenter PS701 Express – IBM POWER7™ based blade server
• IBM BladeCenter PS700 Express – IBM POWER7™ based blade server
• IBM BladeCenter JS12 Express – IBM POWER6™ based blade server
• IBM BladeCenter QS22 – IBM PowerXCell™ 8i based blade server

IBM BladeCenter offers either integrated or pass-through switching in the chassis, giving customers more flexibility when making architectural decisions. IBM blades and the BladeCenter H chassis support the VMware ESXi, Microsoft Hyper-V, open source KVM based Red Hat RHEV-H, and PowerVM virtualization hypervisors, enabling data center consolidation and high density compute configurations.
IBM BladeCenter Blade Servers

Model #        Processor       Cores  GHz   Max Sockets  DIMMs  Max Mem (GB)  Blades per BladeCenter H chassis
HX5            Intel® Xeon     10     2.67  4            16     256           7
HX5            Intel® Xeon     10     2.67  2            16     256           14
HX5 & MAX5     Intel® Xeon     10     2.67  2            56     640           7
HS22V          Intel® Xeon     4      3.6   2            18     288           14
HS22           Intel® Xeon     6      3.6   2            12     192           14
HS12           Intel® Xeon     4      2.83  1            6      24            14
PS704 Express  Power7®         32     2.4   4            32     256           14
PS703 Express  Power7®         16     2.4   2            16     128           14
PS702 Express  Power7®         16     3     2            32     256           14
PS701 Express  Power7®         8      3     1            16     128           14
PS700 Express  Power7®         4      3     1            8      64            14
JS12 Express   Power6™         2      3.8   1            8      64            14
QS22           PowerXCell™ 8i  9      3.2   2            2      32            14

With the double-wide IBM BladeCenter HX5/MAX5 blade, complete databases can be held in memory, accelerating system performance and enhancing throughput by avoiding the latency associated with more traditional page swapping. The HX5/MAX5 blade delivers 640 GB of available memory. Customers can populate an entire IBM BladeCenter H chassis with 7 of these blade servers, giving 4.48 TB of memory in a 9U footprint. The level of virtualization and in-memory data management supported by the HX5/MAX5 conserves power, saves money on licensing costs, and reduces environmental conditioning (HVAC and power) and space requirements.

IBM Systems Director

IBM Systems Director is not limited to IBM blades and can manage other vendors’ blade, rack mount, and tower servers. IBM Systems Director discovers and provides basic management of network devices from Brocade, BNT (recently acquired by IBM), QLogic, Cisco, and others. IBM Systems Director also integrates tightly with VMware’s vCenter to provide management capabilities for VMs. IBM Systems Director manages heterogeneous IT environments including Microsoft Windows®, Intel® Linux®, Power Linux, AIX®, i5/OS®, IBM i, and System z Linux environments across System p, System i®, System x, System z, BladeCenter, and OpenPower®, as well as x86 based non-IBM hardware.
IBM Systems Director integrates tightly with Tivoli and can report results to other tools, including CA, BMC, and EMC. With IBM Systems Director, customers can pre-configure servers, remotely re-purpose systems, and set up automatic updates (including firmware updates) and recoveries. Systems Director provides either a browser based or a command line interface for visualizing managed systems, showing how they are interrelated, and displaying system status. Common Systems Director tasks include discovery, inventory, configuration, system health, monitoring, updates, event notification, and automation across managed systems.

• IBM Systems Director VMControl manages virtual environments across multiple virtualization technologies and hardware platforms, providing visibility and control. VMControl Express is a free Systems Director plug-in.
• IBM Systems Director’s Predictive Failure Analysis feature monitors system health and generates alerts before a failure occurs. Alerts trigger preventative action by system administrators, or through automation, to avoid a service outage. Components monitored include CPUs, memory, hard disk drives, voltage regulator modules, power supply units, temperature sensors, and fans. IBM passes PFA alerts to VMware via Systems Director so that vCenter can move the VMs off the server and maintenance can be performed with no downtime.
• IBM Systems Director Active Energy Manager™ monitors and manages actual energy usage across systems and facilities within the data center in order to maintain service availability within specified energy use parameters.
• IBM Systems Director Network Control provides integration of server, storage, and network management for virtualization environments across platforms. IBM Systems Director Network Control will discover, manage, monitor, and configure network devices, and enables a unified view of network management tasks.

IBM Systems Director and IBM Tivoli manage multiple operating systems, virtualization technologies (VMware, KVM, Hyper-V, PowerVM, and/or z/VM), IBM platforms, and non-IBM platforms, including servers, desktop computers, workstations, notebook computers, storage subsystems, and SNMP devices.

IBM Tivoli®

IBM Tivoli® software provides systems security, storage, monitoring, and configuration capabilities. Tivoli incorporates open systems standards and automation. Expect an IBM Tivoli UCS monitoring and management agent to be announced toward the end of 2012. This technology is currently part of an open beta program. The UCS agent establishes a link with the Cisco UCS Manager, gaining access to system level information collected by UCS. Through this agent, Tivoli monitors UCS performance, health, and capacity trending, providing a view of application performance. Tivoli will monitor, aggregate into a business service view, and visualize hypervisor, application, physical hardware, operating system, storage, and network performance for Cisco UCS systems.
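The PFA-driven evacuation flow described above, where an alert arrives before a component actually fails, can be sketched as follows. All names here are hypothetical illustrations, not the Systems Director or vCenter API; the point is the contrast with a failover that begins only after a failure has occurred.

```python
# Illustrative sketch (hypothetical names, not IBM's API) of the Predictive
# Failure Analysis flow described above: a PFA alert arrives *before* the
# component fails, so VMs are evacuated proactively.
MONITORED = {"cpu", "memory", "disk", "vrm", "psu", "temperature", "fan"}

def handle_event(event: dict, placement: dict) -> None:
    """Drain a host when a PFA alert names a monitored component."""
    if event["type"] == "pfa_alert" and event["component"] in MONITORED:
        host = event["host"]
        healthy = [h for h in placement if h != host]
        for vm in list(placement[host]):
            placement[healthy[0]].append(vm)  # vMotion-style evacuation
        placement[host] = []                  # host drained for maintenance

placement = {"blade1": ["vm-a", "vm-b"], "blade2": ["vm-c"]}
handle_event({"type": "pfa_alert", "component": "memory", "host": "blade1"},
             placement)
# blade1 is now empty; its VMs run on blade2, and the failing memory can be
# replaced with no service outage.
```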
The information collected by the Systems Director UCS agent, such as hardware metrics and hardware load events, will be funneled to Tivoli Netcool/OMNIbus, which will act as a pipe carrying Systems Director UCS agent information to higher level Tivoli products for monitoring, analysis, and management. Tivoli software will be able to manage UCS Manager’s health as an application and, if there is a server failure, identify which VMs and applications are impacted and then initiate a VMotion to move those VMs to a healthy system. IBM currently has access to event information from the UCS Manager through the available UCS APIs. In addition, IBM has very tight integration throughout the entire VMware stack.

IBM BladeCenter Open Fabric Manager (BOFM)

According to IBM, BladeCenter Open Fabric Manager can manage the I/O and network interconnects for up to 256 BladeCenter chassis and up to 3,584 blade servers. BladeCenter Open Fabric Manager, installed on IBM’s Advanced Management Module (AMM), lets customers pre-configure their LAN and SAN connections so that I/O expansion card connections are made automatically, assigning or reassigning Ethernet MAC addresses and Fibre Channel WWN addresses whenever a blade is brought online or repurposed. With BladeCenter Open Fabric Manager installed, the AMM can assign boot device addresses and VLAN tags to individual devices. Later, these assignments can be changed to provide for dynamic provisioning, resource reconfiguration, and blade replacement in the case of a failover. IBM BladeCenter Open Fabric supports open standards and industry interoperability across multiple I/O fabrics, including Ethernet, iSCSI, Fibre Channel over Ethernet (FCoE), Fibre Channel, InfiniBand, and Serial Attached SCSI (SAS). Each BladeCenter H chassis comes with one hot swappable AMM that is used to configure and manage all installed BladeCenter components.
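The BOFM pre-configuration idea can be sketched as a slot-keyed address table. The structure and names below are our own illustration, not the BOFM interface: identities are assigned once per chassis slot, so a replacement blade inserted into that slot inherits the same LAN and SAN identity without rezoning.

```python
# Illustrative sketch (hypothetical names, not the BOFM interface) of the
# address pre-configuration described above: MAC/WWN/VLAN assignments belong
# to a chassis slot, not to a particular piece of blade hardware.
assignments = {
    # (chassis, slot): identities pre-configured once by the administrator
    ("chassis1", 1): {"mac": "00:1A:64:00:00:01",
                      "wwn": "21:00:00:1A:64:00:00:01", "vlan": 120},
    ("chassis1", 2): {"mac": "00:1A:64:00:00:02",
                      "wwn": "21:00:00:1A:64:00:00:02", "vlan": 120},
}

def identity_for(chassis: str, slot: int) -> dict:
    """Identity a blade assumes when inserted into this slot."""
    return assignments[(chassis, slot)]

before = identity_for("chassis1", 1)
# The blade in slot 1 is replaced; the new hardware inherits the identity:
after = identity_for("chassis1", 1)
assert after == before  # LAN/SAN view of the slot is unchanged
```

This slot-bound model is the mirror image of Cisco’s movable Service Profiles: here the identity stays with the physical slot rather than following a profile.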
BladeCenter H supports the installation of a second, redundant AMM, which is recommended for enhanced system availability. Only one Advanced Management Module can control the BladeCenter system at a time. The AMM provides notification when the primary and standby Advanced Management Modules are established, and when a failover automatically occurs. The Advanced Management Module communicates with each blade server to support features such as blade server power-on requests and error and event reporting, as well as controlling Ethernet and serial port connections for remote management access.

With regard to integrated systems, customers today are making plans to move to more integrated systems using converged fabric that supports NAS, iSCSI, FCoE, and automated virtualization. Using IBM Systems Director with Open Fabric Manager, customers can integrate BladeCenter, Cluster 1350, and iDataPlex for scale up, scale out, or a combination, depending on workload requirements.

IBM Virtual Fabric

IBM® Virtual Fabric for IBM BladeCenter is based on the IBM BladeCenter H with 10Gb Converged Enhanced Ethernet switch modules in the chassis and Emulex or Broadcom Virtual Fabric Adapters in each blade server. This configuration delivers up to 20Gb of bandwidth to each blade. Each Virtual Fabric Adapter can split bandwidth between as many as eight virtual NICs (vNICs). With IBM System x and BladeCenter Virtual Fabric solutions from BNT (IBM), Brocade, and Cisco, the same network hardware can act as Ethernet, iSCSI, FCoE, or Fibre Channel, and bandwidth can be allocated in increments from 100Mb to 10Gb.

• Pre-configure over 11,000 LAN and SAN connections once for each blade server.
• Manage up to 256 chassis and up to 3,584 blade servers from a single Advanced Management Module.
• Virtualize any 10Gb Ethernet, iSCSI, or FCoE switch using Virtual Fabric.
• Intelligent Failure Monitoring enables automatic failover between physical or virtual ports in the event of an uplink port failure.
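The vNIC bandwidth rules quoted above (up to eight vNICs per adapter port, allocated in 100Mb increments) can be sketched as a simple validator. The validation logic is our own illustration of the stated constraints, not the adapter’s actual configuration interface.

```python
# Sketch of the Virtual Fabric vNIC allocation rules described above:
# one 10Gb adapter port (two ports give a blade 20Gb), split across up to
# eight vNICs in 100Mb increments. Illustrative only.
PORT_CAPACITY_MB = 10_000  # one 10Gb Virtual Fabric Adapter port
INCREMENT_MB = 100
MAX_VNICS = 8

def allocate_vnics(requests_mb: list) -> list:
    """Validate a per-port vNIC allocation; return it if acceptable."""
    if len(requests_mb) > MAX_VNICS:
        raise ValueError("a Virtual Fabric Adapter supports at most 8 vNICs")
    for r in requests_mb:
        if r % INCREMENT_MB or not (INCREMENT_MB <= r <= PORT_CAPACITY_MB):
            raise ValueError(f"{r}Mb is not a valid 100Mb-increment request")
    if sum(requests_mb) > PORT_CAPACITY_MB:
        raise ValueError("allocations exceed the 10Gb port capacity")
    return requests_mb

# e.g. management, migration, iSCSI, and two production vNICs on one port,
# summing to exactly 10Gb:
allocate_vnics([500, 2000, 4000, 2000, 1500])
```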
Conclusion Cisco UCS is a good fit for general business workloads, but is not a good fit for many of today’s mission critical workloads where reduced transactional latency is a requirement. Cisco’s Intel centric approach to the Blade market, while highly simplified and easy to understand, is not a particularly good fit for many of today’s edge of the web, latency sensitive, Big Data applications. IBM’s BladeCenter H is well suited for high transaction rate workloads requiring low latency as well as for many emerging edge of the web Big Data applications. The flexibility of the IBM Blade Center H solution makes it an attractive option to the fixed architecture approach of some manufacturers. This increased level of flexibility makes Blade Center H a good fit for a broad range of compute requirements. For example: new workloads including message passing HPC, Grid, risk management, and next generation Big Data applications in today’s highly competitive global markets. Copyright © 2011 Harvard Research Group, Inc page 9
Harvard Research Group
Harvard, MA 01451 USA
Tel. (978) 456-3939
Tel. (978) 925-5187
e-mail: hrg@hrgresearch.com
http://www.hrgresearch.com

BLW03026-USEN-00