{DESCRIPTION} This screen displays a right-aligned front-view image of the IBM blade and rack servers. {TRANSCRIPT} Welcome to IBM iDataPlex™ Internet Scale Computing. This is Topic 4 in a series of topics on System x Technical Principles.
{DESCRIPTION} This screen lists the topic objectives. {TRANSCRIPT} The objectives of this course of study are: List three emerging technologies for an iDataPlex solution List three goals that iDataPlex addressed Identify elements of the iDataPlex rack design Match the server offering to its characteristics
{DESCRIPTION} This screen lists the topic agenda and a 3-D image of a hand releasing a ball. {TRANSCRIPT} This topic introduces the IBM System x™ iDataPlex™ innovative design solution for large-scale data centers, lists the advantages of the iDataPlex optimized rack design and its data center power and cooling efficiencies, and identifies the flexible configurations of the iDataPlex 2U and 3U chassis. Finally, we will discuss iDataPlex management features.
{DESCRIPTION} This screen displays four circles, each connected to an image of a server portfolio (clockwise: iDataPlex, BladeCenter, Enterprise rack and towers, and Enterprise eX5). {TRANSCRIPT} IBM System x™ iDataPlex™ is a flexible, massive scale-out data center server solution built on industry-standard components for customers who are looking for compute density and energy efficiency. iDataPlex application positioning against the BladeCenter, rack, and high-end scale-up eX5 technology servers is an important aspect to consider. iDataPlex is positioned for HPC and grid computing, whereas the high-end servers excel in server consolidation and virtualization solutions. BladeCenter offerings are well positioned for scale-out solutions in infrastructure simplification and application serving. iDataPlex is the right choice for customers who are: facing power, cooling, and density challenges; running applications with software redundancy built in and comfortable with lower hardware redundancy; and focused on lowering their capital and operating expenses.
{DESCRIPTION} This screen displays a right-aligned front-view image of the iDataPlex 100U rack. {TRANSCRIPT} The iDataPlex product was built on market needs, draws on all of IBM’s capabilities, and positions IBM as a leader. The name has three parts: “i” for Internet, “Data” for data center, and “Plex” meaning multiple. IBM iDataPlex is a data center solution for high performance computing (HPC) cluster and corporate batch processing customers experiencing limitations of electrical power, cooling, physical space, or a combination of these. By providing a "big picture" approach to the design, iDataPlex uses innovative ways to integrate Intel-based processing at the node, rack, and data center levels to maximize power and cooling efficiencies while providing the compute density needed. A key component of the iDataPlex solution is its optimized rack design, which doubles server density per rack. It is built with industry-standard components to create flexible configurations of servers, chassis, and networking switches that integrate easily. This allows customers to configure customized solutions for applications to meet their specific business needs for computing power, storage intensity, and the right I/O and networking. It also provides easy service management and quick access without having to remove chassis and other components.
{DESCRIPTION} This screen displays a left-aligned image of a group of IBM tower servers, a thermostat with a hand turning the dial, and an energy-efficient bulb. {TRANSCRIPT} To meet customer demands and build upon their requirements (expense frameworks, power and cooling issues, and the flexibility and ability to grow and deploy), there are three important areas of focus: increase compute density by 10x, eliminate data center air conditioning, and decrease server power consumption by 50%.
{DESCRIPTION} This screen displays a right-aligned front-view image of the iDataPlex rack. {TRANSCRIPT} IBM continues to lead the industry in x86 innovation by investing heavily in IBM iDataPlex to solve the needs of large-scale data centers. IBM iDataPlex racks and nodes are designed specifically to address data center space and power-constraint challenges, using up to 40% less power than similarly configured standard 1U servers with an innovative half-depth design that provides better power and cooling efficiency. Customers can go green and save with the efficient and cost-effective iDataPlex design. Its design maximizes the amount of computing that can be deployed in the data center within limited floor space, power, and cooling envelopes. Across the full range of iDataPlex configurations, the servers are easy to maintain, with individually serviceable servers and front access to all hard drives and cabling. The flexible design allows the chassis and racks to be configured to meet specific customer requirements, whether maximum compute density, more storage or I/O density, or a combination to create the specific rack-level computing environment the client needs. The iDataPlex rack is delivered as a pre-integrated solution, so the servers can be deployed and put to work quickly. Finally, iDataPlex servers have common firmware and management with the System x portfolio, providing robust and consistent management across the data center. In some environments, higher rack power levels have forced customers to spread out their servers in an effort to maintain cooling, using up valuable and expensive raised-floor space. IBM anticipated this trend and developed the IBM Rear Door Heat eXchanger for IBM Enterprise Racks. The Rear Door Heat eXchanger's liquid cooling design removes the heat generated from a fully populated rack, so cooler air exits the rear of the rack.
This simple, cost effective, easily installable solution can save on valuable floor space, reduce the heat load to the data center environment and eliminate hot spots within the data center.
{DESCRIPTION} This screen lists the topic agenda and a 3-D image of a hand releasing a ball. {TRANSCRIPT} Our next topic is the iDataPlex optimized rack design, and data center power and cooling efficiencies.
{DESCRIPTION} This screen contains diagrams that illustrate the air depth in a traditional enterprise rack environment versus an iDataPlex rack environment. {TRANSCRIPT} In today’s fast-paced IT environment, overcrowded data centers are becoming more and more common, which means IT managers are simply running out of room to expand. An iDataPlex solution can help with these problems with its unique rack design optimized to save floor space. The innovative rack architecture more than doubles the server density over standard 1U racks, so you can pack more processing power into a highly efficient, compact system without adding more floor space to your data center. Important points to consider: the iDataPlex optimized rack design doubles server density per rack, maximizes the number of servers in the data center because airflow and cooling issues are solved, and delivers great floor space utilization. Airflow efficiency translates into fan power savings: the shallow-depth rack reduces the amount of air needed for cooling by half, and cuts cooling costs 20% compared to equivalent compute power in an enterprise rack. And the Rear Door Heat eXchanger provides the ultimate in cooling savings, virtually eliminating heat exhaust from the rack.
{DESCRIPTION} This screen displays a front-view image of the iDataPlex 100U rack and a traditional 42U rack. {TRANSCRIPT} As mentioned earlier, a typical iDataPlex solution consists of multiple fully populated rack installations. The groundbreaking iDataPlex solution offers increased density in its rack cabinet design. In essence, the iDataPlex rack is two 42U racks connected together with additional vertical bays. It uses the dimensions of a standard 42U enterprise rack but can hold 102 units of equipment, populated with up to 84 servers, plus 16 1U vertical slots for switches, appliances, and power distribution units (PDUs). It also contains two 1U horizontal slots at the bottom for iDataPlex rack management or other low-power/infrequent-access devices. This added density addresses the major problems that prevent most data centers today from reaching their full capacity: insufficient electrical power and excess heat. iDataPlex’s efficiency results in more density within the same infrastructure, even in a standard rack, allowing you to get “more on the floor” with iDataPlex. The iDataPlex rack is shallower in depth compared to a standard 42U server rack, as shown in the diagram in the lower left. The iDataPlex rack is 600 mm deep (840 mm with the Rear Door Heat eXchanger) compared to the 42U rack, which is 1050 mm deep. The shallow depth of the rack and the iDataPlex nodes is part of the reason that the cooling efficiency of iDataPlex is higher than the traditional rack design: air travels a much shorter distance to cool the internals of the server compared to airflow in a traditional rack. The increased air pressure resulting from the shorter distance through the rack, combined with the four larger fans in the 2U and 3U chassis, makes for one of the most efficient air-cooled solutions on the market. This allows racks to be positioned much closer together, actually eliminating the need for “hot aisles” between rows of fully populated racks.
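The unit accounting and depth comparison above can be checked with a quick sketch (all figures taken from the transcript):

```python
# Sketch: verify the iDataPlex rack unit accounting described above.
SERVER_BAYS = 84        # horizontal server bays (two 42U columns)
VERTICAL_SLOTS = 16     # 1U vertical slots for switches, appliances, PDUs
HORIZONTAL_SLOTS = 2    # 1U slots at the bottom for rack-management devices

total_units = SERVER_BAYS + VERTICAL_SLOTS + HORIZONTAL_SLOTS
print(total_units)      # 102 units in a standard 42U-rack footprint

# Depth comparison (mm): the shorter airflow path is the cooling advantage
IDATAPLEX_DEPTH = 600
STANDARD_42U_DEPTH = 1050
print(f"{IDATAPLEX_DEPTH / STANDARD_42U_DEPTH:.0%}")  # 57% of standard depth
```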
And, all this is before adding the effects of including the innovative and incredibly effective Rear Door Heat eXchanger.
{DESCRIPTION} This screen displays a front view image of the iDataPlex 100U rack and a traditional 42-rack {TRANSCRIPT} The iDataPlex solution is made up of many racks of servers that have been custom designed and built to meet the customer's needs for maximum compute power, hybrid CPU and GPU acceleration, storage intensity, and the right I/O and networking. This includes compute nodes, local storage, networking, power distribution, cooling, and management. Each customized solution is integrated and tested by IBM during the manufacturing process. When the iDataPlex is delivered, it is ready to plug in the power feed and network connection, and deploy the software to each node. This means that iDataPlex provides a custom-designed and factory-integrated solution that provides easy deployment and simplified management. The flexible design of iDataPlex provides cost-efficient servers in configurations to meet many needs. Each node design has a common power supply and fan assembly for all models, to minimize costs and maximize the benefits of standardization. The basis for the flex nodes is an industry standard motherboard based on the SSI specification. In addition to flexibility at the server level, iDataPlex offers flexibility at the rack level. It can be cabled either through the bottom, if it's set on a raised floor, or from the ceiling. Front-access cabling and Direct Dock Power enable you to make changes in networking, power connections, and storage quickly and easily. The rack also supports multiple networking topologies including Ethernet, InfiniBand, and Fibre Channel. As you can see, iDataPlex offers a flexible set of configurations created from common building blocks. These configurations are either computationally dense, I/O rich, or storage rich. This modular approach to server design keeps costs low while providing a wide range of node types.
{DESCRIPTION} This screen displays an image of the iDataPlex rack with the Rear Door Heat eXchanger attached, and displays close-up images of the hex airflow design, the standard hose fitting featuring sealed internal coils, and its swing-door capability. {TRANSCRIPT} The optional IBM Rear Door Heat eXchanger, as part of an iDataPlex solution, can provide a high-density data center environment that alleviates cooling challenges. The Rear Door Heat eXchanger is a water-cooled door that is mounted to the rear of the IBM iDataPlex rack to cool the air that is heated and exhausted by the devices inside the rack. A supply hose delivers chilled, conditioned water (kept above the dew point) to the heat exchanger's sealed coils, and a return hose delivers warmed water back from the heat exchanger. That means the air exiting the rear of the rack can actually be cooler than the air going into the rack. The IBM Rear Door Heat eXchanger requires a cooling distribution unit (CDU) in your data center. It connects to your water system with two quick-connects on the bottom of the door. The door swings open so you can still access the PDUs at the rear of the rack without unmounting the heat exchanger. Service clearance is the same as for a standard rear door installation. The heat exchanger does not require electricity.
{DESCRIPTION} This screen displays a right-aligned rear-view image of the IBM iDataPlex dx360 M3 100U rack, and contains two thermal-cooling images: the top image with the Rear Door Heat eXchanger attached and the bottom image without it. {TRANSCRIPT} The innovative iDataPlex design does more than just save power and space; it also helps save cooling costs. With further adjustments, the Rear Door Heat eXchanger can even help cool the data center itself, reducing the need for Computer Room Air Conditioning (CRAC) units. The optional water-cooled Rear Door Heat eXchanger provides energy savings by removing the need to cool air with fans or blowers elsewhere in the computer room, as is done with conventional CRAC units. Because of the design of the iDataPlex rack, the Rear Door Heat eXchanger has a large surface area for the number of servers it cools, making it very efficient. It can greatly reduce, or even eliminate, the requirement for additional cooling in the server room, freeing space that is occupied by the numerous CRAC units that are usually required. For customers who are able to cool their data centers with water, the Rear Door Heat eXchanger can withdraw 100% or more of the heat coming from a 100,000 BTU-per-hour (approximately 30 kilowatt (kW)) rack of servers, alleviating the cooling challenge that many data centers are facing. By selecting the correct water inlet temperature and water flow rate, you can achieve optimal heat removal. The images shown are thermal images taken of a person standing beside an iDataPlex rack under test in the IBM Thermal Lab. The top image shows water flow off, and the bottom image shows water flow on, when the heat exchanger was operational. Even without water cooling, the iDataPlex solution is still at least 20% cooler than the conventional rack approach.
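The heat-load figure above rests on a standard unit conversion; as a quick sketch (the conversion factor is the standard BTU/hr-to-watt value, not from the transcript):

```python
# Sketch of the heat-load conversion cited above.
# 1 BTU/hr = 0.29307107 watts (standard conversion factor).
BTU_PER_HOUR_TO_WATTS = 0.29307107

rack_heat_btu_hr = 100_000          # fully populated rack, per the transcript
rack_heat_kw = rack_heat_btu_hr * BTU_PER_HOUR_TO_WATTS / 1000
print(f"{rack_heat_kw:.1f} kW")     # 29.3 kW, i.e. "approximately 30 kW"
```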
{DESCRIPTION} This screen displays a right-aligned front-view image of the IBM iDataPlex 100U rack. {TRANSCRIPT} For ease of serviceability, all access to hard drives, planar, and I/O is from the front of the rack. There is no need to access the rear of the iDataPlex rack for any servicing except for the Rear Door Heat eXchanger. Additional ease-of-service points are: swappable server trays in the chassis; a blade-like design with the chassis docking into a power connector; chassis guides that keep upper servers in place; and rack-side pockets for cables that provide highly efficient cable routing. Again, all cables except power (PDUs) are routed out the front of the chassis and other components, making service management easy. Flexible support options range from self-maintenance to 24x7 with 4-hour response time, with one phone number for all support.
{DESCRIPTION} This screen displays a right-align front view image of the IBM iDataPlex dx360 M3 100U rack, and a diagram that illustrates the air flow of 100U rack. {TRANSCRIPT} iDataPlex innovative rack solution is designed with emphasis on: Energy efficiency – Optimizes airflow for cooling efficiency with half-depth rack – Reduces pressure drop to improve chilled air efficiency Leadership density – Dual column / Half-depth rack – Standard two-floor tile rack footprint – Up to 168 physical nodes in 8 square feet Flexibility – Matches US & European data center floor tile standards – Compatible with standard forced air environments Ease of use – All service and cabling from the front
{DESCRIPTION} This screen lists the topic agenda and a 3-D image of a hand releasing a ball. {TRANSCRIPT} Next, we will introduce the iDataPlex nodes.
{DESCRIPTION} This screen displays images of the IBM iDataPlex dx360 M2, dx360 M3, dx360 M3 Refresh and dx360 M3 3U Storage server. {TRANSCRIPT} The iDataPlex portfolio continues to evolve to meet the computing requirements in the data center of today and tomorrow. IBM introduced the dx360 M2 in March 2009, based on Intel Nehalem processors which provides maximum performance while maintaining outstanding performance per watt with the highly efficient iDataPlex design. In March 2010, IBM introduced the dx360 M3, increasing our performance and efficiency with the new Intel Westmere processors and new server capabilities which we will go into more detail on in the next few charts. In May 2010, IBM introduced a 3-slot riser card that supports 2 NVIDIA Graphics Processing Units (GPU) and a high bandwidth adapter. We also have a 3U chassis available with the dx360 M3 server, which provides up to 12 3.5” SAS or SATA hard disk drives, up to 24TB per server for large capacity local storage. The iDataPlex portfolio also comes with 3-year customer replaceable unit and onsite limited warranty. Again, within the iDataPlex rack we can mix these offerings to provide the specific rack-level solution that the client is looking for.
{DESCRIPTION} This screen highlights IBM’s marketing strategy, and displays an interior image of the dx360 M3 2U I/O located on the right-hand side of the screen. {TRANSCRIPT} IBM System x dx360 M3 Refresh provides new options that significantly increase the flexibility of iDataPlex. Start with the new I/O capabilities, which support up to two very large x16 PCIe adapters, such as the new NVIDIA M2050, M2070, or M2070Q GPU cards, plus a high-bandwidth network adapter, and high-capacity local storage at up to 6 Gbps performance. In addition, the dx360 M3 offers increased memory capacity of up to 192GB per server. The IBM System x iDataPlex Acceleration node architecture is the next-generation data center solution for clients who find limitations in their exascale computing environments. By delivering customized solutions that help reduce overall data center costs, IBM addresses the business growth challenges in large-scale data centers. iDataPlex incorporates innovative ways to integrate a hybrid Intel-based processor with NVIDIA GPU acceleration for efficiency at the node, to drive more density in the rack and TCO advantage in the data center.
{DESCRIPTION} This screen displays a front-view image of the dx360 M3 new I/O tray with red arrows identifying components located in the front of the unit. {TRANSCRIPT} The IBM System x dx360 M3 offers a new Graphics Processing Unit (GPU) I/O tray featuring a 3-slot riser card, allowing two full-height, full-length, 1.5-slot-wide cards (such as the NVIDIA M2050, M2070, or M2070Q) in the top of the chassis with x16 connectivity. In addition, there is an open x8 slot designed to accommodate a high-bandwidth adapter such as an InfiniBand, 10Gb Ethernet, or Converged Network adapter. The dx360 M3 GPU I/O tray also has an internal slot that accommodates a RAID adapter, providing full 6Gbps performance for up to four 2.5” drives. As mentioned earlier, when compared to outboard solutions, each iDataPlex GPU server is individually serviceable. In the event of a problem with a GPU card, sparing of GPUs becomes much simpler, as each card can be replaced individually, instead of replacing an outboard unit that contains four cards. The significant I/O capabilities also provide maximum local storage performance with RAID. Also, GPUs are provided as part of the Intelligent Cluster integrated solution from IBM, so when there is an issue there is only one number to call for resolution.
{DESCRIPTION} This screen displays a graphic illustrating the cores of the CPU and GPU. {TRANSCRIPT} The use of the GPU to do mathematical computations is one way to meet these increasing application demands. GPU computing is the use of the CPU and the GPU together. The IBM iDataPlex dx360 M3 is powered by both Intel Xeon CPUs and NVIDIA Tesla GPUs, and is designed to be clustered with other dx360 M3 modular servers to form a supercomputer. GPUs have evolved from just doing graphics to becoming general-purpose processors that can do scientific computing. Graphics is, after all, a mathematical problem, a subset of scientific computing, and using CPUs and GPUs together is about choosing the right processor for the right job. Whereas a CPU is great for sequential computing, a GPU is best for parallel computing. An everyday example is Excel (or photo editing): launching the application is completely sequential and should run on the CPU, but the mathematical computations in Excel (or the image-editing filters in photo editing) run best on the GPU. Supercomputers are measured by the industry-standard Linpack benchmark, which tests double-precision performance. Today it takes 8 racks, or about 600 CPUs, just to be 500th in the Top500. A single 1-rack GPU cluster gets you onto the Top500 (at about position 430). Eight racks of GPUs, the same number of racks needed just to get a CPU cluster onto the Top500, gets you into the top 25 fastest computers in the world. And for the same performance, a GPU cluster consumes one-sixth the power, so it is more efficient and costs less to run.
{DESCRIPTION} This screen displays two images of the iDataPlex 100U racks and shows the stages of growth when implementing GPUs in the iDataPlex products. {TRANSCRIPT} Compared to first-generation Intel® Xeon® processor-based iDataPlex servers, the dx360 M3 server with two GPUs improves performance density in the data center for massively parallel computations after software porting. The IBM dx360 M3 Graphics Processing Unit (GPU) I/O capabilities can increase node density with GPU acceleration; for example, consolidating eight dx360 servers (using Westmere CPUs) with a combined capacity of 1 TeraFlop into one dx360 server with two NVIDIA GPUs of the same 1 TeraFlop capacity delivers 72% less power consumption for the same flops. This reduces acquisition costs by 65% while delivering nearly 10 times more performance per server.
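The percentages above can be reproduced with a quick sketch. Note that the wattage and price figures below are hypothetical assumptions chosen only to show how such percentages are computed; they are not published IBM specifications.

```python
# Illustrative sketch only: per-server wattage and price are assumed values.
N_CPU_SERVERS = 8            # dx360 servers (Westmere CPUs) delivering 1 TFLOP
CPU_SERVER_WATTS = 350       # assumed draw per CPU-only server (hypothetical)
GPU_SERVER_WATTS = 784       # assumed draw of one dx360 M3 with two GPUs (hypothetical)

power_saving = 1 - GPU_SERVER_WATTS / (N_CPU_SERVERS * CPU_SERVER_WATTS)
print(f"{power_saving:.0%}")  # 72% less power for the same 1 TFLOP

CPU_SERVER_PRICE = 1.0       # normalized price of a CPU-only server
GPU_SERVER_PRICE = 2.8       # assumed relative price of the GPU server (hypothetical)

cost_saving = 1 - GPU_SERVER_PRICE / (N_CPU_SERVERS * CPU_SERVER_PRICE)
print(f"{cost_saving:.0%}")  # 65% lower acquisition cost
```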
{DESCRIPTION} This screen displays an image that identifies the location of the internal connectors on the dx360 M3 Flex Node system-board tray. {TRANSCRIPT} The dx360 M3 is a 1U server, available as machine type 6391, that fits into both the 2U Flex node and 3U chassis, each supporting two trays containing various combinations of servers, storage, and I/O. Each dx360 M3 tray can be configured with two Intel Xeon 5600 or 5500 series processors and 16 memory slots, for up to 128GB of memory capacity. Each tray also has room for two 2.5-inch disks and two PCI Express 2.0 slots for adding InfiniBand or Ethernet connectivity above and beyond the two Gigabit Ethernet ports on the system board and the 100Mbit Ethernet port for the on-board service processor.
{DESCRIPTION} This screen displays a front view image of the dx360 M3 2U and 3U servers’ system board and 1U expansion trays. {TRANSCRIPT} IBM 2U flex chassis and a 3U chassis can be configured for high-capacity storage requirements to meet a large variety of business needs through an extensive portfolio. Both chassis can be ordered in several different configurations: The Compute Intensive server is a system-board tray designed with one PCI-E adapter connector and one 3.5-inch hard disk drive bay or two 2.5-inch hot-swappable hard disk drive bays (depending on the configuration it is attached to). The 2U Compute + Storage server consists of one system-board tray with the 1U storage expansion unit that is installed in a 2U chassis. The storage expansion unit provides four additional 3.5-inch hard disk drive bays for the system-board tray, for a combined total of five. You can configure the 2U storage server with up to five 3.5-inch hard disk drives. The Acceleration Compute + Input/output server consists of one system-board tray and an I/O expansion tray that is installed in a 2U chassis. You can configure up to eight 2.5-inch hard disk drives, and up to two PCI-E adapters. The 3U storage server consists of one system-board tray and a triple storage expansion unit that is installed in a 3U chassis. The 3U chassis supports up to twelve 3.5-inch hot-swappable hard disk drives and one PCIe adapter. Additional option cards are supported via a riser slot in the system board. When using a 3U storage server configuration, the hard disk drive bay in the system-board tray is not used.
{DESCRIPTION} This screen displays a front view image of the dx360 M3 and a close-up image of the Light Path Diagnostic Panel. {TRANSCRIPT} In addition to the multiple configuration options, located in the center of each system-board tray are the controls, connectors, and LEDs. Reading from left to right, the server has one RS232 serial port, one VGA port (wired to an onboard Matrox G200 graphics adapter supporting resolutions up to 1280x1024), two USB 2.0 ports, one 10/100 Mbps RJ45 connector for dedicated systems management (wired to the Integrated Management Module (IMM)), and two 1 Gbps Ethernet interfaces based on the Intel 82575 controller.
{DESCRIPTION} This screen displays a topology of the Intel 5520 Tylersburg chipset illustrating the connection of the two processors, eighteen DIMM slots, PCIe slots, and the I/O Controller Hub (ICH). {TRANSCRIPT} The IBM dx360 M3 processor subsystem contains an Intel 5520 (Tylersburg) chipset supporting Intel’s latest Xeon 5600 series processors at up to 6.4 GT/s (giga-transfers per second) via two separate point-to-point QuickPath Interconnect (Intel QPI) links. The Xeon 5600 series is based on 32nm technology with a 2nd-generation high-k process and supports 4 and 6 cores per processor package. The Intel QPI is designed for increased bandwidth and low latency; it can achieve data transfer speeds as high as 25.6 GB/sec. The Intel 5520 chipset delivers dual x16 Gen2 or quad x8 PCI Express 2.0 graphics card support. In addition, each processor contains an integrated memory controller that supports three channels of low-power DDR3 memory at up to 1333 MHz. The processors provide a three-level cache hierarchy: 32 KB data / 32 KB instruction of L1 cache per core, 256 KB of L2 cache per core, and a fully shared 8 MB L3 cache (12 MB maximum) that is shared among all cores to match the needs of various applications.
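The 25.6 GB/sec figure follows from the link arithmetic; as a quick sketch (the 2-bytes-per-transfer payload width is a QPI link property assumed here, not stated in the transcript):

```python
# Sketch of the QPI bandwidth arithmetic cited above.
GT_PER_SEC = 6.4          # giga-transfers per second per QPI link
BYTES_PER_TRANSFER = 2    # 16 data bits per transfer in each direction
DIRECTIONS = 2            # QPI links are full duplex

bandwidth_gb_s = GT_PER_SEC * BYTES_PER_TRANSFER * DIRECTIONS
print(bandwidth_gb_s)     # 25.6 GB/s aggregate per link
```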
{DESCRIPTION} This screen lists the iDataPlex dx360 M3 advanced, standard, and basic processor SKUs. {TRANSCRIPT} The dx360 M3 supports all of the new Westmere-EP 5600 series CPUs up through the 95W bin. It is important to understand that not all Westmere processors from Intel are 6-core processors. The Advanced line-up at the top has three 6-core processors with speeds up to 2.93GHz; it also includes a 4-core (3.06GHz) processor. In the Standard line-up, all the processors are 4-core like the Nehalem 5500 series, but the cache has increased from 8 to 12MB over Nehalem. The Basic line-up actually consists of Nehalem-EP 5500 series processors, continuing on from the previous generation. On the right are the Low Voltage 6-core 60W processor and 4-core 40W processors. These processors are tailored for clients who are willing to pay a premium to get the lowest power draw possible. The dx360 M3 does not support 130W SKUs. The 130W processors would provide a small performance improvement over the top-bin 95W processor, but with increased processor cost. Furthermore, the 130W processors would significantly increase power and cooling requirements (70W more per server, nearly 6kW per rack), and redesigning the server for 130W processors would reduce its efficiency for deployments at 95W and below. The overriding values of an iDataPlex solution for clients are memory bandwidth and the highest efficiency at the lowest cost, and the small performance benefit of the 130W processors would not be justifiable. Another thing to note is the memory bandwidth: only the Advanced CPUs provide 1333MHz memory bandwidth, whereas the Standard and Low Voltage CPUs provide 1066MHz and the Basic CPUs provide 800MHz. Although not listed on this chart, the full line-up of Intel Nehalem 5500 series processors remains available and supported on the dx360 M3 as well. To stay current with the latest supported processors, visit the IBM ServerProven Web site.
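The per-rack power delta cited above can be reproduced with a quick sketch (the TDP figures are from the transcript; the 84-server rack count comes from the earlier rack-design topic):

```python
# Sketch of the 130W-vs-95W power-delta estimate mentioned above.
SOCKETS_PER_SERVER = 2
WATTS_130 = 130
WATTS_95 = 95
SERVERS_PER_RACK = 84     # fully populated iDataPlex rack

extra_per_server = SOCKETS_PER_SERVER * (WATTS_130 - WATTS_95)
print(extra_per_server)                            # 70 W per server
print(extra_per_server * SERVERS_PER_RACK / 1000)  # 5.88 kW/rack, "nearly 6 kW"
```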
{DESCRIPTION} This screen displays the Intel Xeon architecture block diagram that illustrates key features that enhance the dx360 M3 processor subsystem. {TRANSCRIPT} Intel Xeon® 5600 features and benefits build on the Xeon® 5500's leadership capabilities. This new CPU and platform architecture delivers better performance per watt and lower power consumption than its predecessor. The foundational improvements to the server platform architecture complement the new microarchitecture for dramatic improvement in native platform performance, including QuickPath Interconnect, an integrated memory controller, and native DDR3 memory (improved memory access speed and lower latency, with more memory capacity). It also supports PCIe 2.0 and 10Gb Ethernet. This new CPU also brings outstanding innovations in processor technologies: Intel® Intelligent Power Technology, Integrated Power Gates, and Automated Low-Power States help lower energy costs by automatically putting the processor and memory into the lowest available power state that meets the current workload, while minimizing the impact on performance. It also offers CPU power management to optimize power consumption through more efficient Turbo Boost and memory power management.
{DESCRIPTION} This screen displays an architecture block diagram that identifies the flow pattern of the connectors and components of the dx360 M3 planar board. {TRANSCRIPT} This block diagram illustrates the functional paths of the major components on the dx360 M3 system board. The Intel 5520 (Tylersburg) IOH chipset provides the interface between the processors and the PCI Express buses, and connects to the ICH10 South Bridge. The ICH10 in turn interfaces with the IMM, the optional mini-RAID connector, the SATA ports, and the USB buses.
{DESCRIPTION} This screen displays a front-view image of the dx360 M3 LGA 1366 socket and processor, and a close-up of the CPU alignment notch. {TRANSCRIPT} The dx360 M3 has two Intel land grid array (LGA) 1366 sockets, also known as Socket B, used as the physical interface for Intel Xeon processors. Unlike the pin grid array (PGA) interface found on most AMD and older Intel processors, there are no pins on the chip; in place of the pins are pads of bare gold-plated copper, which contact the pins of the socket soldered to the system board. The advantage of this architecture is that it is now the system board that has the pins, rather than the CPU. The risk of bent pins is reduced because the pins are spring-loaded and locate onto a surface rather than into a hole. Also, the CPU is pressed into place by a "load plate" rather than by human fingers directly. The installing technician lifts the hinged load plate, inserts the processor, closes the load plate over the top of the processor, and presses down a locking lever. To prevent damage, make sure that the alignment notches on the CPU match the alignment tabs on the socket. The pressure of the locking lever on the load plate clamps the processor's 1366 gold-plated copper contact points firmly down onto the system board's 1366 pins, ensuring a good connection. The load plate only covers the edges of the top surface of the CPU. When installing both processors, make sure that CPU 1 and CPU 2 are identical (number of cores, cache size and type, clock speed, and internal and external clock frequencies).
{DESCRIPTION} This screen displays a front view image of the CPU heat sink, the dust cover, and the heat sink filler. {TRANSCRIPT} Each CPU installation requires a heat sink cooling device. The heat sink is placed on top of the CPU and secured to the system board by four screws. If an optional CPU is not installed, a CPU dust cover and a heat sink filler must be installed in that CPU socket. The dust cover helps prevent dust from falling onto the socket pins on the system board, which could affect processor performance, and the CPU heat sink filler is required to balance airflow impedance.
{DESCRIPTION} This screen provides a single-processor topology of the dx360 M3 memory subsystem featuring three DIMM channels with three memory DIMMs in each channel. {TRANSCRIPT} The dx360 M3 system board supports registered double data rate 3 (DDR3) low-profile (LP) DIMMs and provides Active Memory features, including advanced Chipkill memory protection, for up to 16x better error correction than standard error-correction code (ECC) memory. In addition to offering triple the memory bandwidth of DDR2 or fully buffered memory, DDR3 memory also uses less energy. DDR2 memory already offered up to 37% lower energy use than fully buffered memory. Now, a generation later, DDR3 memory is even more efficient, using 10-15% less energy than DDR2 memory. The dx360 M3 supports up to 256GB of memory in 16 DIMM slots using 2GB, 4GB, 8GB, or 16GB registered DIMMs (RDIMMs). The dx360 M3 also supports either standard 1.5V DIMMs or 1.35V DIMMs that consume 10% less energy.
{DESCRIPTION} This screen provides a single-processor topology of the dx360 M3 memory subsystem featuring three DIMM channels with three memory DIMMs in each channel. {TRANSCRIPT} The redesigned architecture of the Xeon 5600 and 5500 series processors brings radical changes in the way memory works in these servers. For example, the Xeon 5600 and 5500 series processors integrate the memory controller inside the processor, resulting in two memory controllers in a two-socket system. Each memory controller has three memory channels. Depending on the type of memory, the memory population, and the processor model, the memory may be clocked at 1333MHz, 1066MHz, or 800MHz. For each CPU, a minimum of two DIMMs must be installed. The system board tray supports three single-rank or dual-rank DIMMs per channel, or two quad-rank DIMMs per channel. Additional DIMMs may be installed one at a time as needed. However, when populating DIMM slots with quad-rank DIMMs, only 12 DIMM slots are supported. A DIMM or a DIMM filler must occupy each DIMM socket before the server is turned on. Each CPU has its own memory DIMM bank. If only one processor is installed, only the first eight DIMM slots can be used. Adding a second processor not only doubles the amount of memory available for use, but also doubles the number of memory controllers, thus doubling the system memory bandwidth. If you add a second processor but no additional memory for it, the second processor has to access the memory of the first processor "remotely," resulting in longer latencies and lower performance. The latency to access remote memory is almost 75% higher than local memory access. So, the goal should be always to populate both processors with memory. It is also important to populate all three memory channels of each processor. The relative memory bandwidth decreases as the number of populated channels decreases, because the bandwidth of all the memory channels is utilized to support the capability of the processor. As channels are removed, the burden of supporting the requisite bandwidth falls on the remaining channels, causing them to become a bottleneck. If 1.35V and 1.5V DIMMs are mixed, all DIMMs will run at 1.5V. If Chipkill and non-Chipkill DIMMs are used together, all memory will run in non-Chipkill mode.
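As a quick illustrative check (not part of the original course material), the channel-count effect described above can be sketched with the standard DDR3 peak-bandwidth calculation: transfer rate in MT/s times the 8-byte (64-bit) channel width. The 1333 MT/s figure is one of the speeds named in the narration; the formula itself is the usual theoretical peak, not a measured result.

```python
# Illustrative sketch: theoretical peak bandwidth of a DDR3 memory
# subsystem as channels are depopulated (assumption: standard DDR3
# calculation of MT/s x 8 bytes per channel).

def ddr3_peak_bandwidth_gbps(mt_per_sec, channels):
    """Theoretical peak bandwidth in GB/s for a DDR3 speed and channel count."""
    bytes_per_transfer = 8  # each channel has a 64-bit data bus
    return mt_per_sec * bytes_per_transfer * channels / 1000.0

# One Xeon 5600-series memory controller with all three channels at 1333 MT/s:
full = ddr3_peak_bandwidth_gbps(1333, 3)    # ~32.0 GB/s
# With only one channel populated, the peak falls to a third:
single = ddr3_peak_bandwidth_gbps(1333, 1)  # ~10.7 GB/s
print(full, single)
```

This is why the narration stresses populating all three channels per processor: the remaining channels cannot make up the bandwidth lost when a channel is left empty.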
{DESCRIPTION} This screen displays images of simple-swap and hot-swap hard disk drives, and lists the supported disk controllers for the dx360 M3 disk subsystem. {TRANSCRIPT} All iDataPlex models include an integrated six-port SATA II controller. This controller supports up to five (depending on the configuration) internal simple-swap (SS) SATA II drives, or four SS SSDs. Hot-swap SAS or SATA HDDs, or simple-swap SAS HDDs, require an optional adapter. The integrated 3Gbps (x4 PCIe) ServeRAID-BR10il v2 controller offers hardware RAID-0/1/1E support (no cache) for up to four HDDs or SSDs. The 6Gbps (x8 PCIe) ServeRAID-M1015 SAS/SATA controller supports RAID-0/1/10 (no cache) for up to 16 drives (limited by available bays). The IBM ServeRAID M1000 Series Advance Feature Key adds RAID-5 with SED support. The 6Gbps (x8 PCIe) ServeRAID-M5014 SAS/SATA controller offers enhanced performance with 256MB of cache memory, and supports RAID-0/1/10/5/50 for up to 16 drives (limited by available bays). The 6Gbps (x8 PCIe) ServeRAID-M5015 SAS/SATA controller offers enhanced performance with 512MB of cache memory and battery backup, and supports RAID-0/1/10/5/50 for up to 16 drives (limited by available bays). The IBM ServeRAID M5000 Series Advance Feature Key adds RAID-6/60 with SED support to the M5014 and M5015. The IBM ServeRAID M5000 Series Battery Key adds battery backup support to the M5014. The ServeRAID controllers provide SAS data transfer speeds of up to 3Gbps in each direction (full duplex), for an aggregate speed of 6Gbps. The serial design of the SAS bus allows maximum performance to be maintained as additional drives are added. These controllers support either SAS or SATA, hot-swap or simple-swap, and 3.5-inch or 2.5-inch drives. However, these drive types cannot be intermixed: all drives must be the same type, the same physical size, and use the same interface. Note: SATA II drives also operate at a data transfer speed of up to 300MB per second (but in half-duplex mode). This throughput is similar to that of Ultra320 SCSI, with lower latency.
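For the RAID levels named above, usable capacity can be estimated with the textbook formulas. This is an illustrative sketch only (not from the course, and not vendor-specific ServeRAID behavior): n equal drives of capacity c each.

```python
# Hedged sketch: usable capacity for common RAID levels, using the
# textbook formulas. n = number of equal drives, c = capacity of one drive.

def usable_capacity(level, n, c):
    """Usable capacity (same units as c) for n equal drives at a RAID level."""
    formulas = {
        "RAID-0":  n * c,        # striping, no redundancy
        "RAID-1":  c,            # mirroring: capacity of one drive
        "RAID-5":  (n - 1) * c,  # one drive's worth of parity
        "RAID-6":  (n - 2) * c,  # two drives' worth of parity
        "RAID-10": n * c // 2,   # striped mirrors: half the raw capacity
    }
    return formulas[level]

# Example: four 500GB drives, by level.
for level in ("RAID-0", "RAID-1", "RAID-5", "RAID-10"):
    print(level, usable_capacity(level, 4, 500), "GB")
```

The trade-off is the usual one: RAID-0 gives full capacity with no protection, while the parity and mirrored levels give up capacity for redundancy.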
{DESCRIPTION} This screen displays images of simple-swap and hot-swap hard disk drives, and provides bullet highlights of the dx360 M3 disk subsystem. {TRANSCRIPT} The iDataPlex nodes offer a wide array of flexible storage options as shown here, supporting from 1 to 12 3.5-inch hot-swap SAS or SATA drives, from 2 to 8 2.5-inch simple-swap SAS or SATA hard disk drives, or up to eight 2.5-inch solid-state drives (SSDs), offering high performance with high availability and from 50GB to 24TB of storage per chassis, depending on the chassis used and the configuration. The 2.5-inch drives consume approximately half the power of 3.5-inch drives. 2.5-inch solid-state drives use approximately one fifth the power of 2.5-inch HDDs, with triple the reliability and higher read performance than HDDs.
{DESCRIPTION} This screen displays images of the NVIDIA Tesla M2050, NVIDIA Tesla M1060, and NVIDIA Quadro FX3800 adapters. {TRANSCRIPT} The dx360 M2 introduced support for graphics adapters back in 2009 with the NVIDIA Quadro FX3800, and the dx360 M3 has continued to evolve with newer adapter capabilities: the NVIDIA Tesla M1060 and Tesla M2050, followed by the new Tesla M2070 and M2070Q, as part of the iDataPlex solution. The NVIDIA Quadro FX3800 has 192 Compute Unified Device Architecture (CUDA) cores for parallel computation and 1GB of dedicated GDDR3 memory on board. The 256-bit memory interface allows a total memory bandwidth of 51.2GB per second. It is a single-wide PCIe card and its maximum power consumption is 108 watts. The NVIDIA Tesla cards each have one Tesla GPU on board; the M2050 and M2070/M2070Q implement NVIDIA's Fermi architecture. These are the first GPUs designed with the sole purpose of accelerating applications using the general-purpose GPU (GPGPU) model. The primary areas are simulations in many different fields that rely heavily on floating-point calculations. Note: Two M1060, M2050, or M2070/M2070Q GPUs can work together on a common workload for double the performance.
{DESCRIPTION} This screen displays the Tesla T10 series processor internals: the Thread Processor (TP) and the Thread Processor Array (TPA). {TRANSCRIPT} Let's take a look at the technical perspectives of the latest GPU adapters, starting with the NVIDIA Tesla M1060. The Tesla T10 GPU contains 30 Thread Processor Arrays, or TPAs, for a total of 240 Thread Processors, or "cores". The M1060 has a 512-bit memory interface to 4GB of GDDR3 memory with a maximum bandwidth of up to 102GB per second. The M1060 is a double-wide PCIe card and has a maximum power consumption of about 190W. It provides up to 933 Gflops of single-precision floating-point performance (peak) or 78 Gflops double-precision (peak). The NVIDIA Tesla M1060 delivers supercomputing performance while requiring less power and space. Featuring the revolutionary NVIDIA CUDA parallel computing architecture and powered by 240 parallel processing cores, the Tesla M1060 shatters performance-per-watt expectations to help you solve the toughest computing problems faster.
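The peak Gflops figures quoted above follow from the usual cores x flops-per-clock x clock calculation. The shader clock (1.296 GHz) and the per-clock flop counts below are assumptions drawn from public T10 specifications, not figures stated in this course, so treat this as a back-of-envelope sketch.

```python
# Back-of-envelope check of the M1060 peak figures (assumed 1.296 GHz
# shader clock; flops-per-clock counts are assumptions, not course data).

def peak_gflops(units, flops_per_clock, clock_ghz):
    """Theoretical peak Gflops: execution units x flops/clock x clock (GHz)."""
    return units * flops_per_clock * clock_ghz

# Single precision: 240 cores, dual-issued multiply-add + multiply = 3 flops/clock.
sp = peak_gflops(240, 3, 1.296)  # ~933 Gflops, matching the quoted peak
# Double precision: one DP unit per TPA (30 total), FMA = 2 flops/clock.
dp = peak_gflops(30, 2, 1.296)   # ~78 Gflops, matching the quoted peak
print(round(sp), round(dp))
```

The roughly 12:1 single-to-double ratio is why the narration later emphasizes Fermi's much stronger double-precision design.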
{DESCRIPTION} This screen displays a close-up image of the NVIDIA Tesla M2050 and the Tesla M2070/M2070Q adapters. {TRANSCRIPT} The Tesla M2050 and Tesla M2070/M2070Q computing processor boards shown here conform to the PCI Express, double-wide, full-height (4.376 inches by 9.75 inches) form factor and are computing modules based on the NVIDIA Fermi GPU. Each module comprises a computing subsystem with a GPU and high-speed memory. The Tesla 20 series image is shown without the bracket, which is located to your lower right. The vented bracket is shipped standard with all adapters.
{DESCRIPTION} This screen displays 3-D images of the dx360 M3. {TRANSCRIPT} The 3-D drawings illustrate the internals of the new I/O configuration featuring two GPUs. To your left is the three-slot riser, which contains two full x16 slots on top, one on either side, and a x8 slot at the bottom for a high-bandwidth adapter, giving clients the flexibility, the performance, and the number of I/O slots that they demand for tomorrow's workloads. The NVIDIA GPU adapters interface to the dx360 M3 through the industry-standard PCIe bus, which allows GPUs to be quickly and easily integrated into standard server configurations. All NVIDIA PCI Express adapters with on-board GPUs require a x16 mechanical PCIe slot for installation. The NVIDIA PCIe adapters that connect external GPU enclosures to the dx360 M3 system come in both x16 and x8 interfaces, although the x16 PCIe adapter is preferred for performance. PCIe cables are required to connect the NVIDIA PCIe adapters to an NVIDIA external GPU enclosure. When installing an NVIDIA adapter, ensure that it is completely and evenly inserted into the PCIe slot, and use and verify all of the system's retention mechanisms to make sure the card is held firmly in place.
{DESCRIPTION} This screen displays the Tesla T20 series processor internals: the CUDA core. {TRANSCRIPT} In the Tesla T20, or Fermi, architecture, each streaming multiprocessor, or SM (like the Thread Processor Array in the T10 series), has 32 Compute Unified Device Architecture (CUDA) cores (or thread processors), four times as many as in the previous Tesla GPU architecture, for a total of 448 cores on the GPU. The cores share the common resources of their streaming multiprocessor. The GPU consists of hundreds of cores that are extremely good at sharing data among themselves, and they can collaborate to get a task done very fast. That is why GPU cores are more effective at running applications that have high mathematical computation and high data throughput. The GPU also supports a lot of memory: shared memories, constant caches, texture caches, and the newly added L1 and L2 caches. Also, to make the cores in the GPU much more accessible and easily available to programmers, the GPU has a thread scheduler, the NVIDIA GigaThread engine. This essentially enables a programmer to simply launch millions of threads; the GPU thread scheduler then takes care of managing the threads and scheduling them on the cores. We will skip the ECC feature for now.
{DESCRIPTION} This screen displays the Tesla T20 series processor internals: the CUDA core. {TRANSCRIPT} Double-precision arithmetic is at the heart of numerically intensive HPC applications such as linear algebra, numerical simulation, and quantum chemistry. The T20 architecture has been specifically designed to offer unprecedented performance in double precision; up to 16 double-precision fused multiply-add operations can be performed per SM, per clock, a dramatic improvement over the Tesla T10 architecture. T20 improves on the scheduler in previous GPU architectures by issuing two instructions per clock cycle instead of one. Each streaming multiprocessor can manage 48 warps of 32 threads each, for a total of 1,536 active threads of execution. With 14 streaming multiprocessors, a T20-class GPU can handle 21,504 parallel threads. Using this elegant hierarchical model of instruction issuing, T20 achieves very high efficiency. The M2050, M2070, and M2070Q deliver up to 515 gigaflops of double-precision (1,030 gigaflops of single-precision) peak performance. This is all IEEE compliant; in fact, it is compliant with the IEEE 754-2008 standard, the latest standard, and it is powered by a fused multiply-add. A fused multiply-add is a highly accurate mathematical operation because it does not round intermediate results. What this amounts to is a processor that is extremely valued by high-performance computing customers who want to run very high precision and very computationally intensive applications.
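The thread arithmetic in the narration above can be reproduced directly; this short sketch (not from the course) just spells out the multiplication.

```python
# Reproducing the Fermi thread-capacity arithmetic from the narration.
WARP_SIZE = 32     # threads per warp
WARPS_PER_SM = 48  # active warps each streaming multiprocessor can manage
SM_COUNT = 14      # streaming multiprocessors in a T20-class GPU

threads_per_sm = WARPS_PER_SM * WARP_SIZE  # 1,536 active threads per SM
total_threads = threads_per_sm * SM_COUNT  # 21,504 parallel threads on the GPU
print(threads_per_sm, total_threads)
```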
{DESCRIPTION} This screen displays the Tesla T20 series processor internals: the CUDA core. {TRANSCRIPT} The second part of this architecture that is extremely important is the memory. Besides the L1 and L2 caches, NVIDIA has always had a shared memory. In this diagram, the shared memory has increased from 16KB to up to 48KB, with the addition of an L1 cache, which is again shared among 32 cores, and an L2 cache. The architecture itself is a dual-issue architecture. This means that instructions from two different threads can be issued at the same time, which gives the compiler more flexibility to find parallelism in the code. The cache hierarchy really helps applications that have non-uniform memory access patterns. So anything like finite element analysis, any CAE application, ray tracing, or sparse matrix multiplication: all of these benefit greatly from the cache hierarchy.
{DESCRIPTION} This screen displays the Tesla T20 series processor internals: the CUDA core. {TRANSCRIPT} The GigaThread scheduler, as mentioned earlier, enables the GPU to take care of thread scheduling. For example, suppose programmers are performing a multiplication of two large matrices and have to launch one million threads, each of which performs a single multiplication between two elements of the two matrices. The GPU hardware actually takes care of scheduling these threads on the cores, and of any dependencies or conflicts between the threads. The other added feature in the Fermi architecture is the ability to do concurrent kernel execution. What this means is that you can launch multiple functions, or multiple tasks, on the GPU, and these tasks are scheduled in parallel where possible by the GPU hardware. Secondly, a new DMA engine was added to the Fermi architecture. In the past, the GPU could communicate with the CPU over a single bidirectional bus; now there are two bidirectional buses and two bidirectional DMA engines, which enable overlapping loads, computation, and stores between the GPU and the CPU.
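The programming model described above, launching one logical thread per data element and leaving the mapping onto hardware to the scheduler, can be sketched on a CPU. This is a hedged, illustrative model only (the function name and block size are invented for the example); on the real GPU the "scheduler" is the GigaThread hardware, not a loop.

```python
# CPU-only sketch of the CUDA launch model: one logical thread per element,
# threads grouped into blocks, with the global index computed exactly as a
# CUDA kernel would compute block * blockDim + thread.

def elementwise_multiply(a, b, threads_per_block=256):
    n = len(a)
    c = [0.0] * n
    num_blocks = (n + threads_per_block - 1) // threads_per_block
    for block in range(num_blocks):                  # block dispatch (scheduler's job)
        for thread in range(threads_per_block):      # threads within a block
            i = block * threads_per_block + thread   # global thread index
            if i < n:                                # bounds guard, as in a kernel
                c[i] = a[i] * b[i]
    return c

print(elementwise_multiply([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # [4.0, 10.0, 18.0]
```

The programmer's view is just the body of the inner statement; everything else (block dispatch, ordering, conflicts) is what the GigaThread scheduler handles in hardware.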
{DESCRIPTION} This screen displays the Tesla T20 series processor internals: the CUDA core. {TRANSCRIPT} Finally, there is the Error Correction Code (ECC) support; as mentioned earlier, this is a first for any GPU architecture. It provides full ECC: it detects and corrects single-bit errors, and it detects and flags double-bit errors. This is a really important feature for 40-nanometer technologies, and Fermi is a 40-nanometer implementation. The ECC implementation protects the internal register files, the shared memories, the L1 and L2 caches, and the external memory on the GPU board, which is connected by the GDDR5 interface. This is an extremely important differentiator for NVIDIA's GPUs.
{DESCRIPTION} This screen displays a chart that compares the Tesla T20 series board configurations. {TRANSCRIPT} There is only one configuration each for the Tesla M2050 and Tesla M2070. Notice that the specifications differ only in memory; everything else is the same: the Tesla M2050 module offers 3GB of GDDR5 memory on board, while the Tesla M2070 and Tesla M2070Q modules offer 6GB of GDDR5 memory on board. Both of these products can be configured by the OEM or by the end user to enable or disable ECC, or error-correcting codes, which can fix single-bit errors and report double-bit errors. Enabling ECC causes some of the memory to be used for the ECC bits, so the user-available memory decreases to approximately 2.62GB for a Tesla M2050 and approximately 5.25GB for a Tesla M2070 or Tesla M2070Q. The Tesla M2070 and M2070Q add more memory for GPU computing with the same cooling and power, at a lower price point. In addition, the Tesla M2070Q adds Quadro software for professional graphics visualization. The Tesla M2070Q GPU combines Tesla's high-performance computing and NVIDIA Quadro® professional-class advanced visualization in the same GPU. This means that the Tesla M2070Q is capable of running visualization-type applications in addition to its high-performance computing acceleration capabilities. An example of a Quadro software application that the Tesla M2070Q supports is Microsoft's RemoteFX remote visualization; the standard Tesla M2070 does not support Microsoft's RemoteFX. The Tesla M2070Q is the ideal solution for customers who want to deploy high-performance computing and advanced and remote visualization in a datacenter.
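The usable-memory figures quoted above correspond to reserving roughly one eighth of the on-board memory for ECC bits. Note that the 1/8 fraction is an assumption inferred from the quoted numbers, not something stated in the course; this sketch simply checks that it reproduces them.

```python
# Illustrative check: ECC overhead on the Tesla 20-series boards,
# assuming (not stated in the course) that 1/8 of memory holds ECC bits.

def usable_with_ecc(total_gb, ecc_fraction=1 / 8):
    """Memory left for the user after reserving a fraction for ECC bits."""
    return total_gb * (1 - ecc_fraction)

print(usable_with_ecc(3))  # M2050: 3 GB -> 2.625 GB (quoted as ~2.62 GB)
print(usable_with_ecc(6))  # M2070/M2070Q: 6 GB -> 5.25 GB (as quoted)
```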
{DESCRIPTION} This screen displays images of the QLogic PCIe CNA, Emulex PCIe HBA, Brocade PCIe CNA, and IBM High IOPS SS Class SSD HBA. It also displays a table that provides a list of the newly supported host bus adapter (HBA) cards and optional RAID cards. {TRANSCRIPT} The System x iDataPlex dx360 M3 provides I/O flexibility and offers potential investment protection by supporting high-performance PCIe host bus adapter (HBA) cards, such as 10Gb Ethernet, Fibre Channel, InfiniBand, and GPU cards. PCI Express (PCIe) is a high-performance, general-purpose I/O interconnect used for a variety of computing and communication platforms. It maintains key PCI features, but is a fully serial interface rather than the parallel bus architecture found in conventional PCI. PCIe can be used for universal connectivity as a chip-to-chip interconnect, an I/O interconnect for adapter cards, or an I/O attach point to other interconnects. Depending upon the configuration, the dx360 M3 supports up to three high-speed PCIe adapter slots per chassis through the use of a riser card. There are five different riser cards available for iDataPlex: a 1U single-slot riser for the front PCIe slot, used for installation of one PCIe card and supported in all configurations; a 2U two-slot riser for the front PCIe slot, used for installation of two PCIe cards, where any two adapters are supported with a maximum of one GPU or GPGPU adapter; a 2U three-slot riser for the front PCIe slot, with the dx360 M3 only, used for installation of two GPU or GPGPU adapters and any other PCIe card (the third PCIe slot is on the back side of the riser card; the card has a PCIe switch on board, requires a separate power cable, and is supported only in 2U I/O-rich configurations using the PCIe tray); a 1U single-slot riser for the rear PCIe slot, used for installation of any PCIe storage controller card and supported in all 2U configurations when using the dx360 M3; and a 2U single-slot riser for the rear PCIe slot, used for installation of any PCIe storage controller card and supported in 3U configurations only.
{DESCRIPTION} This screen displays a rear view image of the dx360 M3 and an image of the 550W/750W/900W power supplies. {TRANSCRIPT} Each iDataPlex chassis includes a power supply option and low-power-consuming fans that provide operating power and cooling for all components within the chassis. The dx360 M3 offers power supply flexibility: a higher-efficiency 550-watt non-redundant power supply for lower-power grid deployments, an optional higher-efficiency 900-watt power supply for non-redundant requirements, capable of reducing power consumption by up to 8%, or two separate 750-watt AC-to-12VDC power supplies to create an N+N configuration. Note that the 750W N+N power supply actually runs at 900W when both sides are working properly and only drops to 750W if one of the sides drops off. With the redundant power option, customers can still take advantage of all the optimization for software-resilient workloads, and can now take advantage of iDataPlex efficiency for non-grid applications where they desire. The 750W N+N supply is in the same form factor as the 900W non-redundant supply, with two discrete supplies inside the container that are bussed together and two discrete line feeds to split power to separate PDUs. Note that deploying a full rack of redundant power requires doubling the PDU count, but the vertical slots in the iDataPlex rack can easily accommodate this. Whether the customer's requirement is line feed maintenance, node protection, or just increased reliability, iDataPlex can now deliver a solution.
{DESCRIPTION} This screen displays a rear view image of the dx360 M3 with the fan unit slightly removed. It also displays the four-fan unit. {TRANSCRIPT} The chassis fan assembly comprises four large 80mm fans per 2U or 3U chassis, for more efficiency and lower noise than the eight small 40mm fans used in standard 1U servers. The fan assembly is non-redundant and shared between the two chassis elements (that is, server-server or server-storage enclosures). The fans cannot be replaced individually, because the fan assembly is a single unit. The chassis must be removed from the rack in order to service or replace the fan assembly or power supplies. The power supply provides its own fan(s) for cooling and controls the system fans for system cooling. The power supply used in each iDataPlex chassis can be more than 92% efficient, depending on the load, and consumes 40% less power when compared to a traditional 1U server. The fans shared between two nodes in a Compute Chassis are 70% more efficient in terms of power consumption than those in a traditional 1U server. The iDataPlex uses Direct Dock Power to power the nodes in the chassis. Direct Dock Power allows the chassis to be inserted without having to connect power cables: you simply push the chassis into the rack, and the power supply connects to the PDU. You do not have to access the rear of the rack when installing or working with servers. The chassis in turn uses industry-standard power cords that are attached to the iDataPlex rack. When the chassis is installed in an iDataPlex rack, it is automatically connected to power through a PDU that is mounted to the rack rail, eliminating the need to access the rear of the rack to attach the power cord.
{DESCRIPTION} This screen lists the topic agenda and a 3-D image of a hand releasing a ball. {TRANSCRIPT} Our final topic is on iDataPlex Management.
{DESCRIPTION} This screen displays images of the systems management stack: Tivoli software, IBM Systems Director, ToolsCenter, IMM, and UEFI. {TRANSCRIPT} The new generation of IBM System x iDataPlex servers offers a high level of systems management capability: a complete end-to-end stack designed to deliver future-proof management today. It begins with hardware and firmware, featuring the Integrated Management Module (IMM) and the Unified Extensible Firmware Interface (UEFI) introduced with our new generation of Intel-equipped servers in March 2009. It then builds upon that with IBM's ToolsCenter consolidation of tools, with some additional important capabilities. On top of that is our advanced management software, IBM Systems Director, which allows the servers to be managed either locally or remotely and manages both physical and virtual systems. Systems Director comes standard, is simple to start, allows for automated, fast deployment, provides a single interface for the entire infrastructure, and seamlessly plugs into many existing enterprise management solutions. At the very top you have enterprise-level server management with IBM Tivoli software or others. IBM iDataPlex servers also feature Dynamic System Analysis, Automatic Server Restart, Wake on LAN® support, and PXE support. This includes support for Moab Cluster Suite, an intelligent management middleware that provides simple Web-based job management, graphical cluster administration, and management reporting tools, and xCAT (which stands for Extreme Cluster Administration Toolkit), an open source Linux/AIX/Windows scale-out cluster management solution.
{DESCRIPTION} This screen lists IBM Business Partners' icons, and four images of IBM System x systems: the HS22 blade, a 42U rack with console, the IBM BladeCenter S chassis, and the iDataPlex 100U rack. {TRANSCRIPT} IBM needs partners to be successful and to deliver the solutions customers are looking for. We believe open standards benefit our clients and are a major driver of IT innovation and integration. We will continue to engage our partners on innovative concepts like cloud computing, on driving open management standards, and on academic initiatives that will prepare the next generation of IT professionals. And at a client level, we work closely with our partners to deliver full solutions for our clients' needs.
{DESCRIPTION} This screen displays a right-aligned view image of the IBM iDataPlex dx360 M3 100U rack with a data center switch highlighted in the rack. {TRANSCRIPT} There are a wide range of switches to select from based on your computing needs.
{DESCRIPTION} This screen displays an image of one of the Despicable Me characters. {TRANSCRIPT} This is just one of iDataPlex's many customer success stories. Illumination Entertainment collaborated with Mac Guff Ligne, a Paris-based digital production studio, to complete 12 months of intensive graphics and 3-D animation rendering, amounting to up to 500,000 frames per week. To complete the project, the team needed to quickly design and build a dedicated server farm capable of meeting these demanding workloads across its 330-person team of artists, producers, and support staff. The production team also needed efficient space, and an IT solution that was easy to configure, manage, and expand. To avoid the potentially high air conditioning costs associated with operating a data center 24 hours a day, 7 days a week, the company also wanted an energy-efficient technology platform. Illumination tapped IBM and its Paris-based Business Partner Serviware to build a server farm based on IBM's iDataPlex system. With this system's efficient design and flexible configuration, the company was able to meet the intense computing requirements for the film and save room by doubling the number of systems that can run in a single IBM rack. The entire space used to house the data center amounted to four parking spots in the garage of the production facility, about half of what had initially been allotted. The studio's iDataPlex solution included IBM's innovative Rear Door Heat eXchanger, which allows the system to run with no air conditioning required, saving up to 40% of the power used in typical server configurations. Overall, the installation included 6,500 processor cores.
{DESCRIPTION} This screen lists iDataPlex position among the Top500 list. {TRANSCRIPT} System x iDataPlex continues to prove leadership across SuperComputer deployments as shown in the latest Top500 list.
{DESCRIPTION} This screen displays iDataPlex awards. {TRANSCRIPT} This slide lists iDataPlex's 2009-2010 awards and achievements. Among these awards, the IBM iDataPlex dx360 M3 was named the 2010 Readers' Choice: Best HPC Server Product or Technology at the 2010 Supercomputing Conference, held in New Orleans, Louisiana. The annual awards are highly coveted as prestigious recognition of achievement by the HPC community.
{DESCRIPTION} This screen lists the topic summary. {TRANSCRIPT} Having completed this Topic you should now be able to: List three emerging technologies for an iDataPlex solution List three goals that iDataPlex addressed Identify elements of the iDataPlex rack design Match the server offering to its characteristics
{DESCRIPTION} This screen identifies abbreviations and acronyms used in the topic. {TRANSCRIPT} Presented is a glossary of abbreviations and acronyms used in this topic.
{DESCRIPTION} This screen displays html links. {TRANSCRIPT} Listed are some additional resources that will help you learn more about the IBM System x iDataPlex solution.
{DESCRIPTION} Displays the statement of “End of Presentation” in the center of the slide. {TRANSCRIPT} Thank you for participating. This concludes this topic.