  Driving Business Value on
  Power Systems with Solid
  State Drives




April 2009


By Lotus Douglas, Qunying Gao, Lilian Romero,
Linton Ward, and David Whitworth
IBM Systems and Technology Group

Sunil Kamath
IBM Software Group, Information Management

Jim Olson
IBM Integrated Technology Delivery
Executive Summary

       Solid State Drives (SSDs) offer a number of advantages over traditional hard disk drives
       (HDDs). With no seek time or rotational delays, SSDs can deliver substantially better I/O
       performance than HDDs. Capable of driving tens of thousands of I/O operations per
       second (IOPS), as opposed to hundreds for HDDs, SSDs break through performance
       bottlenecks of I/O-bound applications. Applications that require hundreds of HDDs can
       meet their I/O performance requirements with far fewer SSDs, resulting in energy, space,
       and cost savings.

       To demonstrate the benefits of SSDs, we ran experiments comparing SSDs with HDDs.
       The experiments showed a significant performance advantage for SSDs, which resulted
       in a substantial reduction in the number of drives needed to meet the desired level of
       performance. Fewer drives translate into a smaller physical footprint, reduced energy
       consumption, and less hardware to maintain. The experiments also showed better
       application response times with SSDs, which lead to increased productivity and higher
       customer satisfaction.

       Solid state drive technology was introduced more than three decades ago. Until
       recently, however, the high cost per gigabyte and limited capacity of SSDs restricted
       deployment of these drives to niche markets or military applications. Recent advances in
       SSD technology and economies of scale have driven down the cost of SSDs, making
       them a viable storage option for many I/O-intensive enterprise applications.

       While the cost of SSDs is trending downward, the $/GB for SSDs is still substantially
       higher than that of HDDs. It is not cost-effective or necessary to replace all HDDs with
       SSDs. For instance, infrequently accessed (cold) data can reside on lower cost HDDs
       while frequently accessed (hot) data can be moved to SSDs for maximum performance.
       The appropriate mix of SSDs and HDDs should be used to strike a proper balance
       between performance and cost.

       This paper provides information to enable you to integrate SSDs into your storage
       infrastructure so that you can immediately take advantage of SSDs to improve your
       application performance and increase productivity. We describe how to deploy SSDs in a
       tiered storage environment to allow you to leverage your existing storage with SSDs for
       maximum performance and minimum cost. The paper also discusses IBM tools and
       services available to assist you in deploying and managing a storage solution with SSDs.




Leveraging SSDs in Tiered Storage Pools

       Many storage environments have grown to support a diversity of needs and have
       evolved into disparate technologies, leading to storage sprawl. In a large-scale storage
       infrastructure this yields a sub-optimal storage design that can be improved by
       analyzing and managing data access characteristics.

       Tiered storage is an approach that uses different types of storage throughout the
       storage infrastructure: a mix of higher-performing, higher-cost storage and lower-
       performing, lower-cost storage, with data placed according to characteristics such as
       performance needs, age, and the importance of data availability. Properly balancing
       these tiers yields the best performance at minimal cost.

       The focus of this paper is on active, mission-critical data. Typically this is regarded as
       Tier 1 storage. SSDs can be considered a new Tier 0 for the most active data.




         [Figure 1 is a chart of cost per gigabyte versus performance across the
         storage tiers: Tier 0 (ultra high performance), Tier 1 (high performance,
         mission critical), Tier 2 (medium performance, non-mission critical), and
         Tier 3 (low performance, archival/tape).]

       Figure 1: Tiered Storage Environment


       An example of an existing storage environment is shown in Figure 1. The design results
       in significantly increased cost to maintain and support the infrastructure. In addition to
       the immediate effect of this imbalance, growth continues at an increased rate in the
       higher-cost Tier 1. Thus, as growth occurs, the distribution of data continues to shift in a
       non-optimal direction unless deployment is carefully planned and disciplined.


Typically, an optimal design would keep the active operational data in Tier 0 and Tier 1
       and leverage Tiers 2 and 3 for less active data. An example is shown in Figure 2.

       The benefits of a tiered storage approach are simple: they are all cost related. The
       approach saves significant cost on the storage itself, as lower-tier storage is less
       expensive. Beyond that, there are environmental savings, such as reductions in
       energy, footprint, and cooling.


             Tier 0: Ultra high performance applications          (1-3% of data)
             Tier 1: Mission critical, revenue generating apps    (15-20%)
             Tier 2: Meet QoS for non-mission critical apps       (20-25%)
             Tier 3: Archives and long term retention             (50-60%)

         Figure 2: Storage Pyramid

       How to implement Tiered Storage
       Three areas are critical to implementing, maintaining, and leveraging a tiered storage
       solution: software tools to identify and report on all components of the tiered storage
       solution, virtualization to enable control and allocation of your solution, and offerings
       designed to align with your specific needs for IT governance.

       Tivoli Productivity Center (formerly Total Storage Productivity Center) is a prime
       example of the software needed to perform data identification for implementation and
       management. It provides the capability to zero in on the data characteristics that can be
       used to make data placement choices during implementation and in steady state.

       SAN Volume Controller enables virtualization for your storage environment. Virtualization
       is critical to maintaining a tiered storage solution as it provides the capability for your
       administrators to relocate data between tiers of storage without impacting the application
       and customer service levels. Virtualization allows you to leverage the tiered storage
       solution to provide the required flexibility for a dynamic infrastructure.

       The IBM Novus Intelligent Storage Service Catalog (ISSC) offering is a single framework
       aimed at providing storage optimization through more efficient provisioning, better
       analytics of the storage environment, and proper alignment of data to storage tiers.
       The intellectual capital that comprises ISSC is IBM’s Intelligent Storage Service
       Request (ISSR), Process Excellence, and Storage Enterprise Resource Planner
       (SERP). Through detailed interviews with the client, IBM is able to obtain a detailed
       understanding of the customer’s business requirements. ISSR promotes "right-tiering"
       and "right-sizing" of storage provisioning based on these business requirements, acting
       as a front-end interface for storage requests. Upon receipt of the ISSR, Process Excellence
       is utilized by the storage administrator to ensure that proper process and procedure are
       followed at all times, eliminating costly errors and unknown challenges created by a lack
       of standardization. In addition, Novus’s SERP software solution can provide very
       specific data characteristic information that, when combined with the customer
       discussions, enables a tiered storage solution to be deployed and managed more
       effectively.



Leveraging SSDs for a High Value Database

       Improving the response time of some database environments can yield a substantial
       benefit to business results. While a tiered storage strategy focuses on reducing
       operational costs, some environments can also leverage the improved I/O performance
       that SSDs provide. Beyond the benefits of improved performance, implied benefits such
       as infrastructure simplification, ease of storage management, and a reduced need for
       fine-tuning skills result in substantial IT efficiency gains and reduced costs.

       Storage management, performance, and cost are big issues in the database world.
       Database workloads, both transactional and data warehousing, typically require many
       HDDs for I/O performance, both IOPS and bandwidth. Traditional enterprise HDDs,
       including 15K RPM drives, are limited by the rate of head movement: for most database
       workloads they deliver random I/O performance of approximately 150-175 IOPS at a
       latency of about 5-7 ms, and sequential scan bandwidth of about 30-60 MB/sec.
       Write-intensive batch jobs are under pressure to complete within an increasingly
       shrinking time window, leading to reduced up-time for transactional database systems.

       In addition, maintenance jobs such as backup, restore, and database crash recovery,
       which can put heavy pressure on I/O, are also time critical and important to the
       business in maintaining a continuously operational database system. Backup operations
       tend to drive high levels of sequential I/O, while recovery processes drive high levels of
       random I/O. In many customer environments, to maintain the high IOPS rate required to
       service applications with reasonable response times, less data is placed on each HDD,
       resulting in poor utilization of the available storage capacity. This implies that much of
       the capacity on HDDs (greater than 50% in most cases) is wasted or under-utilized, and
       the situation has only worsened with higher density HDDs.

       SSDs offer game-changing performance for database applications by removing the
       limitations that traditional rotating disks impose on database design. Removing the
       traditional I/O bottleneck will revolutionize database architectural design, and SSDs
       eliminate the need for a large number of under-utilized (short-stroked) HDDs to meet
       the heavy I/O demands of database applications.


Customer Scenarios that can Benefit from SSDs

       A broad spectrum of industries, from the financial sector to the consumer service
       industry, including government, with varied or common business challenges can benefit
       from SSD technology. At a fundamental level, these businesses rely on improved
       responsiveness from their critical transactional, Customer Relationship Management
       (CRM), or data warehousing solutions, enabling them to service their clients faster and
       to react to changes and new opportunities more rapidly, resulting in improved
       profitability and increased revenue. With an explosion of data volumes and a need to convert them into
trustworthy information quickly, SSDs help enable IT to address the critical storage
       challenges required to satisfy business needs.

       The following business scenarios represent a few cases where SSD technology can
       deliver significant value.

           •   Customer retention through superior service: enterprises that empower their
               customer support representatives to address clients' needs in real time earn
               better customer loyalty.
           •   A 360-degree view of customer relationships enables businesses to respond to
               market needs and identify new opportunities more rapidly.
           •   Real-time, fast fraud detection enables enterprises spanning financial,
               insurance, and consumer services organizations to improve profitability and
               deliver better customer value.
           •   Faster reporting and business analytics capabilities empower organizations to
               manage risk efficiently.
           •   Faster order processing systems, where the speed of transaction processing
               leads to increased revenue and customer satisfaction.

       To illustrate the types of workloads that benefit from SSD technology, two scenarios
       from real-world customer applications were chosen. Figure 3 shows a workload profile
       from a large consumer products enterprise running its global, mission-critical SAP R/3
       workload with DB2™ on Power Systems. Figure 4 illustrates a workload profile from a
       global financial institution running DB2 on Power Systems and servicing tens of
       thousands of transactions per second.

       The SAP R/3 workload is an 8 TB DB2 database hosted on a single IBM System
       Storage DS8100 disk system with 14 TB of usable capacity. The storage is over-
       provisioned by nearly 75%, primarily due to the need for IOPS from physical disk
       spindles. However, as can be noted in Figure 3, the CPU still spends about 30-40% of
       its time waiting on I/O. Such workloads can benefit from migrating the storage from
       HDDs to SSDs within the DS8100, which will reduce I/O wait, improve SAP transaction
       response time, and save on storage costs by eliminating the need to over-provision
       storage.



                                             CPU and I/O Profile
                                               User%   Sys%    Wait%
        100
         80
         60
         40
         20
          0
       Figure 3: CPU and I/O profile of SAP R/3 workload with DB2 on Power Systems



       The next scenario is from a large, global financial industry company. Figure 4
       illustrates the CPU profile of a 200 gigabyte DB2 database servicing tens of
       thousands of transactions per second with hundreds of concurrent users. In this
       environment, the DB2 database was provisioned with a single DS4800 controller with
       two terabytes of RAID storage, roughly ten times more capacity than the data requires,
       in order to achieve the needed I/O performance and therefore acceptable overall
       transaction response times. As can be noted from Figure 4, the CPU still spends about
       20% of its time waiting on I/O. This scenario is another example where migrating to
       SSDs can consolidate drives by up to 10x while further improving transaction response
       times and handling large volumes of concurrent users.



         [Figure 4 is a bar chart of User%, Sys%, and Wait% CPU utilization over time.]

       Figure 4: CPU and I/O profile of a DB2 workload on Power Systems handling a large
       volume of transactions


Quantifying Benefits of SSDs using an OLTP Workload

       On-line Transaction Processing (OLTP) applications are characterized by large numbers
       of users concurrently executing transactions against a database. OLTP applications
       typically have a significant amount of random I/O and a high number of physical reads
       with the goal of ensuring consistently low response times. Typical OLTP applications
       include banking applications, order processing systems, and airline reservation systems.
       OLTP transactions spend a great deal of time waiting on I/O. The I/O wait time is
       considerably longer for HDDs than SSDs due to delays inherent to HDD mechanical
       parts. SSDs are ideal for OLTP workloads since they do not have any seek or rotational
       delays and can process I/O faster.

       An SSD solution for OLTP applications can offer the following benefits:

             •   a substantial reduction in the number of drives required
             •   increased I/O and throughput performance
             •   a substantial reduction in response time
             •   a reduction in energy consumption
             •   reduced lab space requirement

       To characterize the benefits of SSDs for transactional workloads, an in-house OLTP
       database application was chosen. For our experiments, the application characteristics
       were 60% random reads and about 40% random writes. The system configuration was as
       follows:

Driving Business Value on Power Systems with Solid State Drives
© Copyright IBM Corporation 2009 All Rights Reserved                              Page 7 of 23
             Server         Power 550 Express Model 8204-E8A with 128 GB of memory
             OS             AIX™ 6.1 TL2
             Database       DB2 9.5 FP3


       In total, three experiments were conducted using different types of storage. For the
       base configuration, the entire database was placed on HDDs, using a total of 800 drives
       in order to meet the response time requirements. The database was built using RAID5,
       where the tables with the highest number of IOPS (hot data) were spread across
       720 FC HDDs and the remaining tables (cold data) were spread across 80 SAS HDDs.
       The configuration is shown in Figure 5.



        Base configuration using 800 HDDs:

            720 x 15K RPM FC HDDs (hot data)
            80 x 15K RPM SAS HDDs (cold data)

       Figure 5: Base Configuration Using 800 HDDs


       In the second experiment, a total of 116 drives were used, consisting of a mix of 36 SSDs
       and 80 HDDs. The hot database tables were placed on the SSDs and the cold tables
       remained on the 80 SAS HDDs. The 36 SSDs were placed in six EXP12S drawers, each
       attached to a PCI-X DDR 1.5 GB Cache SAS RAID Adapter, and a total of six
       5+P RAID5 arrays were created on the SSDs. With this many SSDs on one adapter, the
       cache on the RAID adapters can become a performance bottleneck for some workloads,
       so the RAID adapter cache was disabled for this experiment. The response times of
       these SSDs are so fast that the database still saw very good response times on this
       workload even with the adapter's cache disabled. The configuration is depicted in Figure 6.




        Mixed SSD-HDD configuration (hot data moved to SSDs):

            6 SAS adapters: 36 SSDs (hot data)
            4 FC adapters: 80 x 15K RPM SAS HDDs (cold data)

       Figure 6: Mixed SSD-HDD Configuration

       In the third and final experiment, a total of 116 drives were also used, with 36 HDDs
       holding the hot data in place of the 36 SSDs of the mixed storage configuration.
       Because of the price and performance differential, we do not expect customers to do a
       one-to-one substitution of SSDs for HDDs; this experiment was designed to provide a
       direct performance comparison between SSDs and HDDs. So, unlike the 800-HDD
       experiment, we did not “short stroke” the HDDs in order to achieve better I/O
       performance. RAID5 was used for this experiment as well. The configuration is shown in
       Figure 7.

        HDD configuration with the same storage footprint as the SSD configuration:

            6 SAS adapters: 36 x 15K RPM SAS HDDs (hot data)
            4 FC adapters: 80 x 15K RPM SAS HDDs (cold data)

       Figure 7: HDD Configuration with the Same Storage Footprint as SSD

       The end goal of all the experiments was to compare the response times, throughput,
       space usage, and energy consumption of SSDs vs. HDDs. The experiments were
       performed by executing a number of different types of OLTP transactions against the
       database and collecting performance statistics to understand the behavior of the drives.




Results of Experiments
       800 HDDs vs. 116 Drives (mix of 36 SSDs and 80 HDDs)
       For the base configuration with 800 HDDs, the system CPU was 67% busy while the
       remaining 33% was spent waiting for I/Os to complete. The IOPS per drive for the drives
       holding the hot tables maxed out at about 220. In comparison, for the configuration using
       SSDs for the hot tables, the CPU utilization reached over 80% and the IOPS per drive
       was over 7000. With SSDs, I/Os were serviced much faster, resulting in better storage
       and application response times. In addition, more of the CPU was freed up to do useful
       work instead of waiting for I/O. As a result, system throughput increased.

       Figure 8 shows the database transaction response times and relative system throughput
       for the 800-HDD and the mixed SSD-HDD configurations. The configuration with SSDs
       achieved a 1.72X improvement in application response times and a 1.65X improvement
       in transaction throughput over the 800-HDD configuration.

         [Figure 8 is a bar chart of database transaction response time (seconds) and
         relative system throughput for the two configurations: SSD = 1.7X better
         transaction response time and 1.65X better throughput.]

       Figure 8: SSD vs. HDD Database Transaction RT and Throughput


       The average drive read response times for the 800-HDD and mixed SSD-HDD
       configurations are shown in Figure 9. The drive read response time improved by 3X when
       the hot tables were moved to the SSDs (1.7ms for SSDs vs. 5.3ms for HDDs).


         [Figure 9 is a bar chart of average drive read response times (ms):
         SSD = 3X better response time.]

       Figure 9: SSD-HDD RT Comparison




       The efficiency of the drives was measured in terms of transaction throughput per drive.
       For the 800-HDD configuration the throughput per drive was 4.2, and for the SSD
       configuration it was 137.5: each SSD performed 33 times more work than an HDD, as
       shown in Figure 10. This disparity in throughput per drive is due to the SSDs being
       much faster than HDDs; many more HDDs are required to achieve the same throughput
       as a small number of SSDs. Even with a 20:1 ratio of HDDs to SSDs, the 800-HDD
       configuration was still bottlenecked by I/O and could only achieve half the throughput
       of the mixed SSD-HDD configuration.

         [Figure 10 is a bar chart of transaction throughput per drive:
         SSD = 33X HDD.]

       Figure 10: SSD-HDD Throughput Comparison



       Moving the hot tables to SSDs reduced the number of physical drives required for the
       hot data from 720 to 36, resulting in both space and energy savings. Figure 11 shows
       that energy usage is about 90% lower for SSDs; the energy usage was measured at
       peak throughput for all the storage components, excluding the system energy and AC
       cooling energy. Figure 12 shows that the SSD configuration takes about 84% less
       space; the space was calculated from the total space used by the storage sub-system,
       including the controllers, drive enclosures, and drives.


         [Figure 11 is a bar chart of watts per transaction/second: SSD = 90% less
         energy usage. Figure 12 is a bar chart of space usage in rack units (U):
         SSD = 84% less space.]

       Figure 11: SSD-HDD Energy Usage Comparison
       Figure 12: SSD-HDD Space Usage Comparison




116 Drives (36 SSDs + 80 HDDs) vs. 116 Drives (36 HDDs + 80 HDDs)
       In this experiment the number of drives remained the same: both the SSD configuration
       and the HDD configuration used 36 drives for the hot data and 80 drives for the cold data.
       The purpose of the experiment was to make a direct comparison of HDD and SSD
       performance in a highly I/O-intensive environment.

       Focusing the analysis on the most interesting subset of the 116-drive comparison, the
       36 HDDs versus the 36 SSDs holding the "hot" data (the tables with the highest
       amount of IOPS), the following observations were made:

       For the 36-HDD measurement, the CPU utilization was only 5%; the remaining 95%
       was spent either waiting on I/Os to complete or idle. The drive read response time was
       6.8 ms and the IOPS per drive was about 170. In comparison, the read response time
       and IOPS per drive for the 36-SSD measurement were 1.7 ms and 7000, respectively.
       Figure 13 shows both the relative response times and the drive performance
       comparisons for the 116-drive configurations.


         [Figure 13 contains two bar charts comparing the 36 SSDs with the 36 HDDs
         holding the hot data: relative read response time (SSD = 4X better) and
         relative drive performance (IOPS/drive: SSD = 40X better;
         transactions/drive: SSD = 42X better).]

       Figure 13: 116 Drives (36 SSDs + 80 HDDs vs. 36 HDDs + 80 HDDs): RT and Drive
       Performance Comparisons




Determining Whether an AIX Application Might Benefit from SSDs

       AIX provides performance tools that can be used to determine if a configuration has hot
       data that would perform better if moved to SSDs. The most valuable tools for assessing
       data hot spots are the AIX tools iostat and filemon. In addition, database vendors also
       provide tools to analyze hot data.




In order to demonstrate the capabilities of these tools, we will compare iostat and filemon
       data from the 800-drive HDD run and the 116-drive mixed SSD-HDD run. The data will
       show the I/O performance improvement gained from using SSDs.

       Identifying Hot Disks
       The iostat tool can provide a good first-level I/O analysis because it gives a high-level,
       real-time view of overall storage performance and is simple to run. To isolate the hot
       data, look for data with a high rate of random, small-block I/O per GB on the HDDs.

       Running the command "iostat -t" provides CPU utilization details. If there is no I/O wait
       time, then SSDs will not improve system performance. As shown in Table 1, there was a
       substantial amount of I/O wait time in the 800-HDD experiment, so there is significant
       potential for performance improvement from using SSDs.

         Storage Configuration                         % iowait
         800-HDD experiment                             33.2
         116 drive mixed SSD - HDD experiment            1.9
       Table 1: SSD and HDD iowait Output
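
       As a quick first check, iostat can be run interactively. The following is a
       minimal sketch; the interval and count values here are arbitrary:

               # Report system-wide CPU utilization, including % iowait,
               # every 10 seconds, 6 times
               iostat -t 10 6

       If % iowait stays near zero across the samples, faster drives are unlikely to
       help; sustained double-digit % iowait, as in the 800-HDD run, makes SSDs worth
       investigating.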

       Running iostat with the "-D" flag, as shown in Tables 2 and 3, provides detailed output
       per logical disk (hdisk), including read and write response times. In order to focus on
       the hdisks with the hot data, only those that contain the hot data and the database logs
       (hdisk320-321) are shown below.
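
       The per-hdisk detail in Tables 2 and 3 is gathered with the -D form of the
       command. A sketch of one possible invocation; the hdisk list and the 60-second
       interval are illustrative:

               # Detailed per-hdisk statistics, averaged over one 60-second interval
               iostat -D hdisk178 hdisk179 60 1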

       There are several things to notice here:

           1. The total system storage I/O requests, or transfers per second (tps), are shown
              at the top of each report. The tps is the total of reads per second (rps) and
              writes per second (wps). Note that the tps on the SSD run is about double the
              tps on the HDD run.

           2. HDDs max out at about 200 IOPS, so look for hdisks that do over 200 IOPS (or
              tps) per physical drive (see the arithmetic sketch after this list).

               •   For the 800-HDD configuration, each hot-data hdisk consists of 30 physical
                   drives: hdisk178 - hdisk201 are RAID5 arrays, each with 30 x 15K RPM HDDs.
               •   For the 116-drive mixed SSD-HDD configuration, each hot-data hdisk consists
                   of 6 physical drives: hdisk202 - hdisk207 are RAID5 arrays, each with 6 SSDs.
               •   Each write to a RAID5 array causes 4 drive I/Os (2 reads and 2 writes).

           3. "%tm act" shows the percentage of time that there is at least one I/O request
              outstanding to that hdisk. Look for hdisks that are at least 99% busy.

           4. The read and write "avg serv" columns indicate the average service time per transfer.
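
       A back-of-envelope way to apply the 200-IOPS guideline to Table 2, using hdisk178
       (a RAID5 array of 30 HDDs) as an example. The awk one-liners simply do the
       division and are purely illustrative:

               # Logical transfers per physical drive, ignoring the RAID5 write penalty
               echo "3929.7 30" | awk '{printf "tps per drive = %.0f\n", $1 / $2}'              # ~131

               # Counting the RAID5 small-write penalty (each write = 4 drive I/Os)
               echo "2592.7 1337 30" | awk '{printf "drive IOPS ~ %.0f\n", ($1 + 4*$2) / $3}'   # ~265

       Either way, the array is at or beyond what 15K RPM drives can sustain, which is
       consistent with the near-100% "%tm act" values in Table 2.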




           System totals:   Kbps: 326144    tps: 77196.3    Kb_read: 1892352    Kb_wrtn: 1372352

                                 xfers                                              read serv (ms)                        write serv (ms)
 hdisk      %tm         bps           tps          bread         bwrtn           rps         avg             min         max         wps        avg       min    max
            act
hdisk178    100         16.1M         3929.7            10.6M          5.5M     2592.7             6         0.1        250.3         1337       2.5      0.2   261.8
hdisk179    100         15.7M         3844.9            10.3M          5.4M     2522.5            5.9        0.1        223.4         1322       3.2      0.2   275.1
hdisk180   99.9          8.8M         2148.8             5.6M          3.2M      1379             4.3        0.1        199.6         769.7      3.7      0.2   123.4
hdisk181    100          9.1M         2216.9             5.8M          3.2M     1423.7            4.1        0.1        214.5         793.2          2    0.2   122.7
hdisk182   99.6          9.1M         2230.8             5.9M          3.2M     1444.4            4.2        0.1        205.1         786.4      2.8      0.2   230.4
hdisk183    100          9.2M           2234             5.9M          3.3M     1433.7            4.1        0.1        220.8         800.3      3.3      0.2   122.7
hdisk184    100         15.7M         3833.4             9.7M          6.0M     2368.7            7.6        0.1         542          1465       2.5      0.2   448.8
hdisk185    100         15.7M           3842             9.7M          6.0M     2380.2            7.3        0.1        330.1         1462       3.3      0.2    280
hdisk186   99.9          9.0M           2193             5.6M          3.3M     1375.8            4.9        0.1        117.4         817.2      3.9      0.2   101.9
hdisk187   99.5          8.9M         2183.9             5.6M          3.3M     1368.5             4         0.1        125.8         815.4      1.9      0.2     91
hdisk188   99.5          9.0M         2208.8             5.6M          3.4M      1378             3.9        0.1         270          830.8      2.8      0.2    91.1
hdisk189   99.8          9.0M         2203.7             5.6M          3.4M     1373.5            3.9        0.1         128          830.2      3.3      0.2   120.6
hdisk190    100         15.4M         3761.9             9.8M          5.6M     2384.6            6.2        0.1        207.2         1377       1.9      0.2   318.1
hdisk191    100         15.4M         3765.6             9.8M          5.6M     2400.3            6.1        0.1        237.8         1365       2.6      0.2    344
hdisk192   99.8          9.1M         2218.2             5.9M          3.2M     1444.1            4.3        0.1        202.1         774.1      2.8      0.2   179.7
hdisk193   99.5          9.2M         2245.8             6.0M          3.2M     1468.5            4.2        0.1        213.6         777.2      1.7      0.2   176.5
hdisk194   99.9          9.5M         2317.2             6.2M          3.3M     1521.3            4.2        0.1        234.6         795.9      2.2      0.2   176.5
hdisk195   99.8          9.5M           2311             6.2M          3.3M      1513             4.2        0.1        195.3           798      2.5      0.2    177
hdisk196    100         15.6M           3802             9.7M          5.9M     2369.6            6.1        0.1         250          1432       1.8      0.2   414.9
hdisk197    100         15.5M         3773.7             9.6M          5.9M     2345.5            6.1        0.1         238          1428       2.5      0.2   330.8
hdisk198   99.8          8.6M           2095             5.3M          3.3M     1289.6            4.4        0.1        228.9         805.4      2.8      0.2    184
hdisk199   99.4          8.6M         2102.5             5.3M          3.4M     1284.6            4.3        0.1        204.8         817.9      1.6      0.2    184
hdisk200   99.5          9.4M         2292.6             6.1M          3.3M     1481.1            4.2        0.1        215.2         811.5      2.1      0.2   181.8
hdisk201   99.8          9.5M         2310.4             6.1M          3.3M     1495.3            4.2        0.1        226.4         815.1      2.4      0.2   181.8
hdisk320   27.3         12.8M         3108.9              0.0         12.8M           0.0         0.0        0.0          0.0         3109       0.1      0.1    15.5
hdisk321   31.4         12.9M         3143.7              0.0         12.9M           0.0         0.0        0.0          0.0         3144       0.1      0.1    16.7


Table 2: 800-HDD Experiment iostat -D Output

           System totals:   Kbps: 538923    tps: 127744    Kb_read: 3406752    Kb_wrtn: 1991904

                                 xfers                                              read serv (ms)                        write serv (ms)
           %tm           bps           tps              bread         bwrtn       rps              avg        min        max         wps        avg      min    max
 hdisk     act
hdisk202         100         79.1M         19302.1        53.0M         26.1M         12942.2       1.7           0.1      57          6360      3.9      0.5   49.9
hdisk203         100         76.7M         18736.1        51.1M         25.6M         12476.3       1.7           0.1     61.5       6259.8      3.8      0.5   56.2
hdisk204         100         76.8M         18753.8        51.7M         25.1M         12622.1       1.5           0.1     41.8       6131.7      3.6      0.4   51.3
hdisk205         100         78.9M         19259.9        52.8M         26.1M         12888.9       1.7           0.1      40          6371      3.9      0.6   50.3
hdisk206         100         77.3M         18879.8        51.6M         25.8M         12588.7       1.6           0.1     54.7       6291.1      3.7      0.5   52.1
hdisk207         100         77.2M         18853.8        51.9M         25.3M         12681.5       1.5           0.1     58.4       6172.3      3.6      0.5   59.4
hdisk320     35.4            13.7M           3292.4             0       13.7M               0           0          0            0    3292.4      0.1      0.1     18
hdisk321     36.3            13.6M           3273.4             0       13.6M               0           0          0            0    3273.4      0.1      0.1   18.2


Table 3: Mixed SSD-HDD Experiment iostat -D Output

Identifying Hot Logical Volumes
        After using iostat to determine that there are hot hdisks on a system, the next step is to
        use filemon to find the hot logical volumes (LVs). The LVs map to the database tables.
        Filemon provides summary and detailed performance reports on files, LVs and Physical
        Volumes (PVs). The filemon output below includes the LV and PV summary reports and
        some examples from the detailed LV reports.

        How to run filemon:
        Filemon can be run in either online mode or offline mode using a previously collected
        trace. The offline method, used for this data, is described below. Note that running the
        AIX trace command can cause significant performance degradation if the system CPU is
        very busy. This caused some of the SSD throughputs reported by filemon to be lower
        than those reported by iostat. The HDD results were not affected because there were
        plenty of spare CPU cycles in that experiment due to the substantial I/O wait time.

                        trace -andfp -C all -T 30000000 -L 30000000 -o filename.trc
                        gensyms -F > gensyms.out
                                  (the -F option provides the file, LV, and hdisk names needed by filemon)
                        filemon -i filename.trc -n gensyms.out -O detailed,all -o filemon.out

        Filemon’s "Most Active Logical Volumes" table sorts the LVs by utilization. LVs with low
        utilization typically do not need to be moved to SSDs. LVs with high utilization are good
        candidates for further investigation into whether they should be moved to SSDs (a high
        utilization does not necessarily indicate a performance problem).

        The 800-HDD "Most Active Logical Volumes" filemon report, depicted in Table 4, shows
        28 LVs that were at least 91% busy during the trace. The last LV listed is only 68%
        busy; the rest of the LVs on the system are even less busy and are not shown here.
        The database tables on the 28 busiest LVs of the 800-HDD experiment were all moved
        to SSDs for the 116-drive SSD experiment.


                                   800-HDD                                          116 Drives (Mixed SSD - HDD)
                        Most Active Logical Volumes                                Most Active Logical Volumes
     util       #rblk      #wblk     KB/s      volume                 util           #rblk   #wblk    KB/s        volume


      1.00      3592         2248   10521.2   /dev/hddR04V1S                0.98     6544     4888     17968.5   /dev/ssdR02V2S
      0.99       5440         2024    13447    /dev/hddR04V2S                0.98     7312     3352     16761.4   /dev/ssdR04V3S
      0.99       4200         1288    9887.1   /dev/hddR01V4S                0.98     7568     4136      18396    /dev/ssdR03V4S
      0.99       5256         2120   13288.4   /dev/hddR03V3S                0.98     5976     4976     17214.1   /dev/ssdR02V4S
      0.99       5816         1080   12423.7   /dev/hddR01V2S                0.97     6984     5816     20118.7   /dev/ssdR02V3S
      0.99       5288         1160   11616.6   /dev/hddR01V1S                0.97     5928     5120     17364.9   /dev/ssdR02V1S
      0.99       4136         2056   11155.4   /dev/hddR03V4S                0.97     5760     5120     17100.9   /dev/ssdR01V4S
      0.99       4992         1552   11789.5   /dev/hddR02V2S                0.97     5152     3600     13756.2   /dev/ssdR04V2S
      0.99       6344         1456   14052.3   /dev/hddR02V1S                0.97     5864     3520     14749.5   /dev/ssdR03V3S
      0.99       4264         2168   11587.7   /dev/hddR03V2S                0.96     6864     3440     16195.5   /dev/ssdR03V1S
      0.99       5096         1208   11357.1   /dev/hddR01V3S                0.96     4864     4136      14146    /dev/ssdR04V4S
      0.99       4592         2168   12178.7   /dev/hddR03V1S                0.96     6456     3528     15692.6   /dev/ssdR04V1S
      0.99       6680         1240   14268.5   /dev/hddR02V4S                0.95     6768     3968     16874.6   /dev/ssdR03V2S
      0.99       4912         1776   12048.9   /dev/hddR04V3S                0.93     5344     5616     17226.6   /dev/ssdR01V3S
      0.98       4048         1904    10723    /dev/hddR04V4S                0.91     4880     4912     15390.8   /dev/ssdR01V2S
      0.98       4936         1408   11429.2   /dev/hddR02V3S                0.91     3880     4968      13907    /dev/ssdR01V1S

0.98     1792      1512    5952.4   /dev/hddR03V1C                0.86     3136      3288      10097.1   /dev/ssdR03V2C
       0.98     1608      1256    5159.7   /dev/hddR04V1C                0.81     3320      1176       7066.7   /dev/ssdR02V1C
       0.97     1720      1248    5347.1   /dev/hddR04V3C                 0.8     2600      2968       8751.6   /dev/ssdR04V2C
       0.97     1624       584    3977.9   /dev/hddR02V2C                0.77     2872      3136       9443.2   /dev/ssdR04V1C
       0.97     1608      1328    5289.4   /dev/hddR04V2C                0.77     3224      1024       6676.9   /dev/ssdR01V1C
       0.97     1736       664    4323.8   /dev/hddR02V3C                0.73        0     14520      22822.1   /dev/dbloglv
       0.96     1592       632    4006.7   /dev/hddR01V3C                0.71     1960      3264       8210.9   /dev/ssdR03V3C
       0.95     1816       608      4367   /dev/hddR01V2C                0.71     2336      3192       8688.8   /dev/ssdR03V1C
       0.94     1624      1368    5390.3   /dev/hddR03V2C                0.68     2328      3016       8399.6   /dev/ssdR04V3C
       0.94     1448       728    3920.2   /dev/hddR01V1C                0.68     2712      1176       6111.1   /dev/ssdR02V3C
       0.93     1512       760    4093.2   /dev/hddR02V1C                0.66     2352      1096       5419.5   /dev/ssdR01V3C
       0.92     1592      1448    5476.8   /dev/hddR03V3C                0.63     2192      1040         5080   /dev/ssdR02V2C
       0.68        0     11424   20581.2   /dev/dbloglv                  0.61     2016      1224       5092.5   /dev/ssdR01V2C
         Table 4: 800-HDD and Mixed SSD-HDD filemon Report


         Detailed Logical Volumes Tables
         Detailed reports are shown for both a hot LV that is a good candidate to move to an SSD
         and for the database log LV, which is not a good candidate. The reports are included in
         Tables 5, 6, 7 and 8.

         Hot LV details
         Important things to note here are:
              1. The average I/O size is 4 KB (8.0 512-byte blocks), which is a good match
                 for SSDs.
              2. The I/O is completely random (the read and write sequence counts equal the
                 numbers of reads and writes).
              3. Read response times are relatively long.
              4. The average seek distance is very long (20.9 GB; see the conversion sketch
                 after this list).
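
          As a quick check of observation 4: filemon reports seek distances in 512-byte
          blocks, so the average for the hot LV in Table 5 converts as follows (a
          one-line illustration):

                  # 40796294.3 blocks x 512 bytes is about 20.9 GB (decimal GB)
                  echo "40796294.3" | awk '{printf "avg seek distance = %.1f GB\n", $1 * 512 / 1e9}'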

  VOLUME: /dev/hddR04V1S  description: raw
  reads:          449 (0 errs)
    read sizes (blks):   avg 8.0     min 8      max 8         sdev 0.0
    read times (msec):   avg 5.801   min 0.118  max 34.264    sdev 4.517
    read sequences:      449
    read seq. lengths:   avg 8.0     min 8      max 8         sdev 0.0
  writes:         281 (0 errs)
    write sizes (blks):  avg 8.0     min 8      max 8         sdev 0.0
    write times (msec):  avg 1.194   min 0.373  max 4.414     sdev 0.641
    write sequences:     281
    write seq. lengths:  avg 8.0     min 8      max 8         sdev 0.0
  seeks:          730 (100.0%)
    seek dist (blks):    init 105356576,
         avg 40796294.3  min 14072   max 115550480  sdev 28644621.9
  time to next req(msec): avg 0.380  min 0.000  max 3.741     sdev 0.559
  throughput:     10521.2 KB/sec
  utilization:    1.00

Table 5: 800-HDD Detailed filemon LV Report

  VOLUME: /dev/ssdR02V2S  description: raw
  reads:          818 (0 errs)
    read sizes (blks):   avg 8.0     min 8      max 8         sdev 0.0
    read times (msec):   avg 1.030   min 0.314  max 14.616    sdev 1.894
    read sequences:      818
    read seq. lengths:   avg 8.0     min 8      max 8         sdev 0.0
  writes:         611 (0 errs)
    write sizes (blks):  avg 8.0     min 8      max 8         sdev 0.0
    write times (msec):  avg 3.066   min 0.853  max 18.028    sdev 2.961
    write sequences:     611
    write seq. lengths:  avg 8.0     min 8      max 8         sdev 0.0
  seeks:          1429 (100.0%)
    seek dist (blks):    init 15678600,
         avg 39276756.9  min 18680   max 117667808  sdev 28871870.5
  time to next req(msec): avg 0.222  min 0.000  max 5.564     sdev 0.410
  throughput:     17968.5 KB/sec
  utilization:    0.98

Table 6: Mixed SSD-HDD Detailed filemon LV Report




Database log details
                The database log details are shown here as an example of data that would not benefit
                from SSDs (the log is on HDDs in both cases):
                    1. The I/O is very sequential. (The log is striped across two hdisks, which
                       causes filemon to report a substantial number of write sequences; see the
                       quick check after this list.)
                    2. The response times for both runs are very short due to the storage array's
                       write cache.
                    3. The average seek distance is very short (8 KB).
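
                The seek percentage that filemon prints is the fraction of I/Os that required
                a seek. A quick check against the 800-HDD log LV in Table 7 (487 seeks out of
                1424 writes):

                        echo "487 1424" | awk '{printf "seek ratio = %.1f%%\n", 100 * $1 / $2}'   # 34.2%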


  VOLUME: /dev/dbloglv  description: raw
  writes:         1424 (0 errs)
    write sizes (blks):  avg 8.0     min 8      max 16        sdev 0.4
    write times (msec):  avg 0.132   min 0.114  max 0.952     sdev 0.035
    write sequences:     487
    write seq. lengths:  avg 23.5    min 8      max 56        sdev 5.8
  seeks:          487 (34.2%)
    seek dist (blks):    init 25305528,
         avg 16.0        min 16      max 16     sdev 0.0
  time to next req(msec): avg 0.195  min 0.130  max 3.317     sdev 0.137
  throughput:     20581.2 KB/sec
  utilization:    0.68

Table 7: 800-HDD Experiment filemon database log

  VOLUME: /dev/dbloglv  description: raw
  writes:         1787 (0 errs)
    write sizes (blks):  avg 8.1     min 8      max 32        sdev 1.3
    write times (msec):  avg 0.131   min 0.116  max 0.885     sdev 0.033
    write sequences:     553
    write seq. lengths:  avg 26.3    min 8      max 88        sdev 7.9
  seeks:          553 (30.9%)
    seek dist (blks):    init 57817232,
         avg 16.0        min 16      max 16     sdev 0.0
  time to next req(msec): avg 0.178  min 0.131  max 8.299     sdev 0.218
  throughput:     22822.1 KB/sec
  utilization:    0.73

Table 8: Mixed SSD-HDD Experiment filemon database log


                Which LVs Should be Moved to SSD
                Once the hot LVs are known, use the "lslv" command to find the LV sizes and calculate
                the IOPS/GB. LVs with the highest IOPS/GB should be moved first.
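
                A minimal sketch of that calculation follows. The LV name comes from Table 4;
                the 160 PPs and the ~1300 IOPS figure (which would come from iostat or
                filemon) are hypothetical values used for illustration:

                        # LV size = number of PPs x PP size, both reported by lslv
                        lslv hddR04V1S | egrep 'PP SIZE|PPs'

                        # If the LV is, say, 160 PPs x 64 MB = 10 GB and sustains ~1300 IOPS:
                        echo "1300 10" | awk '{printf "IOPS/GB = %.0f\n", $1 / $2}'   # 130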


                Using DB2 Snapshot to Identify Hot Logical Volumes
                The DB2 snapshot monitor tool provides another means of identifying the hot
                tablespaces that are the best candidates for placement on SSDs. It is used to capture
                information about the database and any connected applications at a specific point in
                time. A DB2 tablespace snapshot provides the following information:

                     •    Tablespace Name
                     •    Tablespace Page size
                     •    Number of used pages
                     •    Bufferpool data/index/xda physical reads
                     •    Bufferpool read/write time

                To identify which containers are hot, analyze the following properties:
                     • Access density: the number of physical I/Os per second relative to the
                         number of used pages in the tablespace.
                     • Access latency: the average latency of those physical I/Os.
                     • Relative weight: a per-tablespace value, computed from access density and
                         access latency, that helps prioritize which tablespaces to place on SSD.
                     • Sequentiality ratio: the proportion of physical accesses that are
                         sequential (asynchronous prefetch) rather than random.



The weighting factor is used to determine which tablespaces are better candidates to
       place on SSDs. The steps below show how to compute the weighting factor:

                Total physical I/Os =
                    Buffer pool data physical reads + Buffer pool index physical reads +
                    Buffer pool xda physical reads + Buffer pool temporary data physical reads +
                    (Direct reads * 512) / tablespace page size

                Page Velocity = Total physical I/Os / snapshot interval in seconds

                Access time = Total buffer pool read time + Direct reads elapsed time

                Access density = Page Velocity / number of used pages in tablespace

                Access latency = Access time / Total physical I/Os

                Weighting factor = Access density * Access latency

                Sequentiality ratio =
                    (Asynchronous pool data page reads + Asynchronous pool index page reads +
                     Asynchronous pool xda page reads) /
                    (Buffer pool data physical reads + Buffer pool index physical reads +
                     Buffer pool xda physical reads)

When this information is summarized for all tablespaces in descending order of weighting factor, the tablespaces with the highest weighting factors are the best candidates for SSDs. Tablespaces with a high sequentiality ratio are better candidates to remain on HDDs, since HDDs handle sequential I/O efficiently; it is random I/O that benefits most from SSDs. These calculations are easy to script, as sketched below.
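The following ksh/awk sketch computes the weighting factor for a single tablespace; all input values are hypothetical placeholders to be replaced with counters from the snapshot output:

    #!/usr/bin/ksh
    # Placeholder counters for one tablespace (substitute real snapshot values)
    data_reads=1000000       # Buffer pool data physical reads
    index_reads=0            # Buffer pool index physical reads
    xda_reads=0              # Buffer pool xda physical reads
    temp_data_reads=0        # Buffer pool temporary data physical reads
    direct_reads=0           # Direct reads (512-byte sectors)
    page_size=4096           # Tablespace page size (bytes)
    used_pages=8000000       # Number of used pages
    read_time_ms=2500000     # Total buffer pool read time (ms)
    direct_time_ms=0         # Direct reads elapsed time (ms)
    interval=10              # Snapshot interval (seconds)

    awk -v dr=$data_reads -v ir=$index_reads -v xr=$xda_reads \
        -v tr=$temp_data_reads -v drd=$direct_reads -v ps=$page_size \
        -v up=$used_pages -v rt=$read_time_ms -v dt=$direct_time_ms \
        -v iv=$interval 'BEGIN {
            total    = dr + ir + xr + tr + (drd * 512) / ps   # total physical I/Os
            velocity = total / iv                             # pages per second
            density  = velocity / up                          # access density
            latency  = (rt + dt) / total                      # ms per physical I/O
            printf("weighting factor = %.3f\n", density * latency)
        }'

Running the script once per tablespace and sorting the results in descending order reproduces the ranking described above.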

Table 9 shows example data from a DB2 tablespace snapshot taken on the 800-HDD configuration:

               db2 get snapshot for tablespaces on DBNAME

               Tablespace name                                TS_S_13       TS_OL_1
               Tablespace Page size (bytes)                   4096          4096
               Number of used pages                           15001088      8561920
               Buffer pool data physical reads                10162297      458610
               Buffer pool temporary data physical reads      0             0
               Asynchronous pool data page reads              0             0
               Buffer pool index physical reads               0             0
               Buffer pool temporary index physical reads     0             0
               Asynchronous pool index page reads             0             0
               Buffer pool xda physical reads                 0             0
               Buffer pool temporary xda physical reads       0             0
               Asynchronous pool xda page reads               0             0
               Total buffer pool read time (millisec)         26705251      1189809
               Total elapsed asynchronous read time           0             0
               Direct reads                                   0             0
               Direct reads elapsed time (ms)                 0             0

                 Table 9: Tablespace snapshot for TS_S_13 and TS_OL_1


A summary of the computed weighting factors for TS_S_13 and TS_OL_1 follows:


                              TS_S_13        TS_OL_1
       Total physical I/Os    21099197       458610
       Page velocity          2109919.7      45861
       Access time            26705251       1189809
       Access density         1.406          0.053
       Access latency         1.265          2.504
       Weighting factor       1.779          0.133
       Sequentiality ratio    0              0

               Table 10: Tablespace weighting factor for TS_S_13 and TS_OL_1

Tablespace TS_S_13 has a much higher weighting factor than tablespace TS_OL_1 and is therefore a better candidate for moving to an SSD.




Migration Tools
As discussed earlier in this paper, after identifying the hot tablespaces using iostat, filemon, or the DB2 tablespace snapshot, the next step is to move them from HDDs to SSDs. Several tools are available for data migration; this paper focuses on IBM Softek Transparent Data Migration Facility (TDMF) and the AIX migratepv command.


       Softek TDMF
Softek TDMF allows customers to move data between storage from different vendors and switch over to the new storage with no interruption to business-generating applications. Softek TDMF is host-based software that moves data at the block level between logical volumes without interrupting reads and writes to those volumes.

Figure 14 shows the Softek TDMF architecture, which consists of a command-line interface (CLI), a filter driver, and configuration files.




[Figure: a source volume on an HDD is migrated to a target volume on an SSD]
       Figure 14: Softek TDMF Migration Tool Architecture

There are two ways to migrate data using Softek TDMF: dynamic activation and auto switchover. Both methods support migration with no disruption to the application.

Below is an example of migrating data using auto switchover mode, which can be used at any time with minimal performance impact.

          •   Step 1: Creating a migration volume and associating it to a valid source volume
                  # tdmf create tR01V3S /dev/hddR01V3S

          •   Step 2: Adding a target volume to migration volume
                  # tdmf add tR01V3S /dev/ssdR01V3S

          •   Step 3: Starting migration and auto switchover after migration is done
                  # tdmf copy -x tR01V3S

   •   Step 4: Removing the old source volume; the new source volume takes over
                  # tdmf remove tR01V3S




With auto switchover mode, the source volume (old volume on HDD) is removed after the
       data is migrated to the target volume (new volume on SSD). Mirrored writes to both the
       old and new volumes are no longer required.

Figure 15 shows the progress of a migration of hot tablespaces from HDDs to SSDs while an OLTP application is running. Before the migration started, the database transaction response time was about 0.04 seconds. During the migration, the application ran uninterrupted, with the response time rising to 0.07 seconds. After the migration completed, the response time dropped to 0.02 seconds and system throughput nearly doubled.



[Figure: system throughput (tps) plotted over 10-second intervals, showing throughput with hot data on HDD, the data migration period, and roughly doubled throughput after the hot data moved to SSD]
       Figure 15: Performance Impact of Migrating Data from HDD to SSD



       Migratepv
Another tool that can be used to migrate data is the AIX migratepv command, which moves physical partitions from one AIX hdisk to one or more other hdisks. The following example moves all partitions from hdisk20 to hdisk30:

                  migratepv hdisk20 hdisk30

The migration works by creating a mirror of the logical volumes involved and then resynchronizing them. While a migration is in progress, the volume group is locked, so only one hdisk can be migrated at a time; see the sketch below for moving a single LV.
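When only one hot logical volume needs to move, the -l flag restricts the migration to that LV. A minimal sketch, assuming a hypothetical LV named db2lv01:

    # Move only the partitions belonging to db2lv01 from hdisk20 to hdisk30
    migratepv -l db2lv01 hdisk20 hdisk30

This leaves the other partitions on hdisk20 in place, which is useful when a drive holds a mix of hot and cold LVs.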




Conclusion

Customer I/O demands have outpaced the performance capabilities of traditional hard disk drives. Latencies associated with spinning platters and moving arms limit the speed of HDD data access. The near-instantaneous data access of SSDs removes this I/O bottleneck, creating a paradigm shift in I/O performance. Applications throttled by poor I/O performance can benefit greatly from SSDs. For these I/O-intensive applications, it is no longer necessary to "short stroke" drives to achieve good I/O performance.

As demonstrated in our experiments, SSDs deliver a substantial improvement in I/O performance, which translates into increased business output, reduced energy consumption, reduced floor space requirements, and cost savings. Our experiments showed the following SSD benefits when comparing 800 HDDs with a mix of 36 SSDs and 80 HDDs:

           •   1.65X system throughput improvement
           •   1.72X application response time improvement
           •   3X improvement in drive read response time
           •   33X improvement in throughput per drive
           •   90% reduction in energy consumption
           •   84% reduction in floor space requirements

       Additional experiments with an equal number of HDDs and SSDs resulted in 4X better
       drive read response times and 42X better throughput per drive.

       SSDs can deliver significant business value to a broad spectrum of industries facing
       various business challenges. We illustrated two workloads representing real world
       customer applications in the financial sector and consumer products industry. Both
       scenarios showed advantages for migrating to SSDs, including improved transaction
       response times, increased numbers of users, and storage cost savings.

The superior performance of SSDs must be balanced against cost. Multi-tiered storage solutions can provide that balance: an application's "hot" data can be moved to SSDs, while less active data remains on lower cost HDDs. IBM provides tooling and services to help customers characterize their data access patterns and make smart data placement choices for the highest-performing, lowest-cost storage solution.




For More Information

IBM Power Servers: ibm.com/systems/p
IBM Storage: ibm.com/services/storage


Notices and Disclaimer

Copyright © 2009 by International Business Machines Corporation.

No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation.

Product data has been reviewed for accuracy as of the date of initial publication. Product data is subject to change without notice. This information may include technical inaccuracies or typographical errors. IBM may make improvements and/or changes in the product(s) and/or program(s) at any time without notice. References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs, or services available in all countries in which IBM operates or does business.

THE INFORMATION PROVIDED IN THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IBM EXPRESSLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT. IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and conditions of the agreements (e.g., IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement, etc.) under which they are provided. IBM is not responsible for the performance or interoperability of any non-IBM products discussed herein.

The performance data contained herein was obtained in a controlled, isolated environment. Actual results that may be obtained in other operating environments may vary significantly. While IBM has reviewed each item for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere.

Statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

IBM, IBM Power, AIX, DB2, DB2 9.5, Enterprise Storage Server, FlashCopy, and TotalStorage are trademarks of International Business Machines Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

POW03025USEN-00




Driving Business Value on Power Systems with Solid State Drives
© Copyright IBM Corporation 2009 All Rights Reserved                                                    Page 23 of 23

Más contenido relacionado

La actualidad más candente

IBM Launches Newest ProtecTIER Appliance — Positioning Data Dedupe for Mid-Ma...
IBM Launches Newest ProtecTIER Appliance — Positioning Data Dedupe for Mid-Ma...IBM Launches Newest ProtecTIER Appliance — Positioning Data Dedupe for Mid-Ma...
IBM Launches Newest ProtecTIER Appliance — Positioning Data Dedupe for Mid-Ma...IBM India Smarter Computing
 
Intel Server & Data Center Optimization Plan
Intel Server & Data Center Optimization PlanIntel Server & Data Center Optimization Plan
Intel Server & Data Center Optimization PlanUmair Mohsin
 
Smarter Backup
Smarter BackupSmarter Backup
Smarter BackupIBM
 
IBM Storage Optimization Free Self-Assessment Tool
 IBM Storage Optimization Free Self-Assessment Tool IBM Storage Optimization Free Self-Assessment Tool
IBM Storage Optimization Free Self-Assessment ToolIBM India Smarter Computing
 
Digital Fuel IT Financial Management -Storage cost visibility_0911
Digital Fuel IT Financial Management -Storage cost visibility_0911Digital Fuel IT Financial Management -Storage cost visibility_0911
Digital Fuel IT Financial Management -Storage cost visibility_0911yisbat
 
Smarter Datacenter - Sanjeev Gupta
Smarter Datacenter - Sanjeev GuptaSmarter Datacenter - Sanjeev Gupta
Smarter Datacenter - Sanjeev GuptaJyothi Satyanathan
 
Cloud - Acxiom Case StudyOrganic web assetNew asset
Cloud - Acxiom Case StudyOrganic web assetNew assetCloud - Acxiom Case StudyOrganic web assetNew asset
Cloud - Acxiom Case StudyOrganic web assetNew assetIBM India Smarter Computing
 
Managed Data Services
Managed Data ServicesManaged Data Services
Managed Data ServicesSimon Dale
 
Managed Data Services
Managed Data ServicesManaged Data Services
Managed Data ServicesMark Halpin
 
IBMSystem x3850 X5and x3950 X5 IBMSystems and TechnologyData Sheet
IBMSystem x3850 X5and x3950 X5 IBMSystems and TechnologyData SheetIBMSystem x3850 X5and x3950 X5 IBMSystems and TechnologyData Sheet
IBMSystem x3850 X5and x3950 X5 IBMSystems and TechnologyData SheetIBM India Smarter Computing
 
How to enhance performance and reduce energy costs in the midmarket with ibm ...
How to enhance performance and reduce energy costs in the midmarket with ibm ...How to enhance performance and reduce energy costs in the midmarket with ibm ...
How to enhance performance and reduce energy costs in the midmarket with ibm ...IBM India Smarter Computing
 
Idc Reducing It Costs With Blades
Idc Reducing It Costs With BladesIdc Reducing It Costs With Blades
Idc Reducing It Costs With Bladespankaj009
 

La actualidad más candente (15)

IBM Launches Newest ProtecTIER Appliance — Positioning Data Dedupe for Mid-Ma...
IBM Launches Newest ProtecTIER Appliance — Positioning Data Dedupe for Mid-Ma...IBM Launches Newest ProtecTIER Appliance — Positioning Data Dedupe for Mid-Ma...
IBM Launches Newest ProtecTIER Appliance — Positioning Data Dedupe for Mid-Ma...
 
Intel Server & Data Center Optimization Plan
Intel Server & Data Center Optimization PlanIntel Server & Data Center Optimization Plan
Intel Server & Data Center Optimization Plan
 
Smarter Backup
Smarter BackupSmarter Backup
Smarter Backup
 
IBM System Storage SAN Volume Controller
IBM System Storage SAN Volume ControllerIBM System Storage SAN Volume Controller
IBM System Storage SAN Volume Controller
 
IBM Storage Optimization Free Self-Assessment Tool
 IBM Storage Optimization Free Self-Assessment Tool IBM Storage Optimization Free Self-Assessment Tool
IBM Storage Optimization Free Self-Assessment Tool
 
Digital Fuel IT Financial Management -Storage cost visibility_0911
Digital Fuel IT Financial Management -Storage cost visibility_0911Digital Fuel IT Financial Management -Storage cost visibility_0911
Digital Fuel IT Financial Management -Storage cost visibility_0911
 
Smarter Datacenter - Sanjeev Gupta
Smarter Datacenter - Sanjeev GuptaSmarter Datacenter - Sanjeev Gupta
Smarter Datacenter - Sanjeev Gupta
 
Provident Financial-Cisco
Provident Financial-CiscoProvident Financial-Cisco
Provident Financial-Cisco
 
Cloud - Acxiom Case StudyOrganic web assetNew asset
Cloud - Acxiom Case StudyOrganic web assetNew assetCloud - Acxiom Case StudyOrganic web assetNew asset
Cloud - Acxiom Case StudyOrganic web assetNew asset
 
A blueprint for smarter storage management
A blueprint for smarter storage managementA blueprint for smarter storage management
A blueprint for smarter storage management
 
Managed Data Services
Managed Data ServicesManaged Data Services
Managed Data Services
 
Managed Data Services
Managed Data ServicesManaged Data Services
Managed Data Services
 
IBMSystem x3850 X5and x3950 X5 IBMSystems and TechnologyData Sheet
IBMSystem x3850 X5and x3950 X5 IBMSystems and TechnologyData SheetIBMSystem x3850 X5and x3950 X5 IBMSystems and TechnologyData Sheet
IBMSystem x3850 X5and x3950 X5 IBMSystems and TechnologyData Sheet
 
How to enhance performance and reduce energy costs in the midmarket with ibm ...
How to enhance performance and reduce energy costs in the midmarket with ibm ...How to enhance performance and reduce energy costs in the midmarket with ibm ...
How to enhance performance and reduce energy costs in the midmarket with ibm ...
 
Idc Reducing It Costs With Blades
Idc Reducing It Costs With BladesIdc Reducing It Costs With Blades
Idc Reducing It Costs With Blades
 

Similar a Driving Business Value on Power Systems with Solid-state Drives

HitVirtualized Tiered Storage Solution Profile
HitVirtualized Tiered Storage Solution ProfileHitVirtualized Tiered Storage Solution Profile
HitVirtualized Tiered Storage Solution ProfileHitachi Vantara
 
Analysis of SOFTWARE DEFINED STORAGE (SDS)
Analysis of SOFTWARE DEFINED STORAGE (SDS)Analysis of SOFTWARE DEFINED STORAGE (SDS)
Analysis of SOFTWARE DEFINED STORAGE (SDS)Kaushik Rajan
 
datacore-1-341M4XT
datacore-1-341M4XTdatacore-1-341M4XT
datacore-1-341M4XTGary Mason
 
Product Brief Storage Virtualization isn’t About Storage
Product Brief Storage Virtualization isn’t About StorageProduct Brief Storage Virtualization isn’t About Storage
Product Brief Storage Virtualization isn’t About StorageIBM India Smarter Computing
 
Workload Centric Scale-Out Storage for Next Generation Datacenter
Workload Centric Scale-Out Storage for Next Generation DatacenterWorkload Centric Scale-Out Storage for Next Generation Datacenter
Workload Centric Scale-Out Storage for Next Generation DatacenterCloudian
 
Scale-Out Architectures for Secondary Storage
Scale-Out Architectures for Secondary StorageScale-Out Architectures for Secondary Storage
Scale-Out Architectures for Secondary StorageInteractiveNEC
 
Hitachi Virtual Storage Platform Competitive Comparison Guide
Hitachi Virtual Storage Platform Competitive Comparison GuideHitachi Virtual Storage Platform Competitive Comparison Guide
Hitachi Virtual Storage Platform Competitive Comparison GuideHitachi Vantara
 
Hitachi comparative-virtual-storage-platform-g1000
Hitachi comparative-virtual-storage-platform-g1000Hitachi comparative-virtual-storage-platform-g1000
Hitachi comparative-virtual-storage-platform-g1000Md Mosaddeq Hossain
 
Josh Krischer - How to get more for less (4 november 2010 Storage Expo)
Josh Krischer - How to get more for less (4 november 2010 Storage Expo)Josh Krischer - How to get more for less (4 november 2010 Storage Expo)
Josh Krischer - How to get more for less (4 november 2010 Storage Expo)VNU Exhibitions Europe
 
Backing Up Mountains of Data to Disk
Backing Up Mountains of Data to DiskBacking Up Mountains of Data to Disk
Backing Up Mountains of Data to DiskIT Brand Pulse
 
Efficient storage solutions for your dynamic infrastructure
Efficient storage solutions for your dynamic infrastructureEfficient storage solutions for your dynamic infrastructure
Efficient storage solutions for your dynamic infrastructureIBM India Smarter Computing
 
galileo-white-paper-tier-0
galileo-white-paper-tier-0galileo-white-paper-tier-0
galileo-white-paper-tier-0Tim Conley
 
VSP G1000 Checklist - 7 Q's to ask your storage vendor?
VSP G1000 Checklist - 7 Q's to ask your storage vendor? VSP G1000 Checklist - 7 Q's to ask your storage vendor?
VSP G1000 Checklist - 7 Q's to ask your storage vendor? Hitachi Vantara
 
Presentation dell™ power vault™ md3
Presentation   dell™ power vault™ md3Presentation   dell™ power vault™ md3
Presentation dell™ power vault™ md3xKinAnx
 
G11.2014 magic quadrant for general-purpose disk
G11.2014   magic quadrant for general-purpose diskG11.2014   magic quadrant for general-purpose disk
G11.2014 magic quadrant for general-purpose diskSatya Harish
 
Intel and MariaDB: web-scale applications with distributed logs
Intel and MariaDB: web-scale applications with distributed logsIntel and MariaDB: web-scale applications with distributed logs
Intel and MariaDB: web-scale applications with distributed logsMariaDB plc
 

Similar a Driving Business Value on Power Systems with Solid-state Drives (20)

HitVirtualized Tiered Storage Solution Profile
HitVirtualized Tiered Storage Solution ProfileHitVirtualized Tiered Storage Solution Profile
HitVirtualized Tiered Storage Solution Profile
 
Software defined storage
Software defined storageSoftware defined storage
Software defined storage
 
Analysis of SOFTWARE DEFINED STORAGE (SDS)
Analysis of SOFTWARE DEFINED STORAGE (SDS)Analysis of SOFTWARE DEFINED STORAGE (SDS)
Analysis of SOFTWARE DEFINED STORAGE (SDS)
 
datacore-1-341M4XT
datacore-1-341M4XTdatacore-1-341M4XT
datacore-1-341M4XT
 
Storage Virtualization isn’t About Storage
Storage Virtualization isn’t About StorageStorage Virtualization isn’t About Storage
Storage Virtualization isn’t About Storage
 
Product Brief Storage Virtualization isn’t About Storage
Product Brief Storage Virtualization isn’t About StorageProduct Brief Storage Virtualization isn’t About Storage
Product Brief Storage Virtualization isn’t About Storage
 
Workload Centric Scale-Out Storage for Next Generation Datacenter
Workload Centric Scale-Out Storage for Next Generation DatacenterWorkload Centric Scale-Out Storage for Next Generation Datacenter
Workload Centric Scale-Out Storage for Next Generation Datacenter
 
Scale-Out Architectures for Secondary Storage
Scale-Out Architectures for Secondary StorageScale-Out Architectures for Secondary Storage
Scale-Out Architectures for Secondary Storage
 
Hitachi Virtual Storage Platform Competitive Comparison Guide
Hitachi Virtual Storage Platform Competitive Comparison GuideHitachi Virtual Storage Platform Competitive Comparison Guide
Hitachi Virtual Storage Platform Competitive Comparison Guide
 
Hitachi comparative-virtual-storage-platform-g1000
Hitachi comparative-virtual-storage-platform-g1000Hitachi comparative-virtual-storage-platform-g1000
Hitachi comparative-virtual-storage-platform-g1000
 
Josh Krischer - How to get more for less (4 november 2010 Storage Expo)
Josh Krischer - How to get more for less (4 november 2010 Storage Expo)Josh Krischer - How to get more for less (4 november 2010 Storage Expo)
Josh Krischer - How to get more for less (4 november 2010 Storage Expo)
 
Backing Up Mountains of Data to Disk
Backing Up Mountains of Data to DiskBacking Up Mountains of Data to Disk
Backing Up Mountains of Data to Disk
 
Efficient storage solutions for your dynamic infrastructure
Efficient storage solutions for your dynamic infrastructureEfficient storage solutions for your dynamic infrastructure
Efficient storage solutions for your dynamic infrastructure
 
galileo-white-paper-tier-0
galileo-white-paper-tier-0galileo-white-paper-tier-0
galileo-white-paper-tier-0
 
VSP G1000 Checklist - 7 Q's to ask your storage vendor?
VSP G1000 Checklist - 7 Q's to ask your storage vendor? VSP G1000 Checklist - 7 Q's to ask your storage vendor?
VSP G1000 Checklist - 7 Q's to ask your storage vendor?
 
Dell SalesPlayBook.pdf
Dell SalesPlayBook.pdfDell SalesPlayBook.pdf
Dell SalesPlayBook.pdf
 
Presentation dell™ power vault™ md3
Presentation   dell™ power vault™ md3Presentation   dell™ power vault™ md3
Presentation dell™ power vault™ md3
 
A New Era in Midrange Storage IDC Analyst paper
A New Era in Midrange Storage IDC Analyst paperA New Era in Midrange Storage IDC Analyst paper
A New Era in Midrange Storage IDC Analyst paper
 
G11.2014 magic quadrant for general-purpose disk
G11.2014   magic quadrant for general-purpose diskG11.2014   magic quadrant for general-purpose disk
G11.2014 magic quadrant for general-purpose disk
 
Intel and MariaDB: web-scale applications with distributed logs
Intel and MariaDB: web-scale applications with distributed logsIntel and MariaDB: web-scale applications with distributed logs
Intel and MariaDB: web-scale applications with distributed logs
 

Más de IBM India Smarter Computing

Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments IBM India Smarter Computing
 
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...IBM India Smarter Computing
 
A Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization PerformanceA Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization PerformanceIBM India Smarter Computing
 
IBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM India Smarter Computing
 

Más de IBM India Smarter Computing (20)

Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments
 
All-flash Needs End to End Storage Efficiency
All-flash Needs End to End Storage EfficiencyAll-flash Needs End to End Storage Efficiency
All-flash Needs End to End Storage Efficiency
 
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
 
IBM FlashSystem 840 Product Guide
IBM FlashSystem 840 Product GuideIBM FlashSystem 840 Product Guide
IBM FlashSystem 840 Product Guide
 
IBM System x3250 M5
IBM System x3250 M5IBM System x3250 M5
IBM System x3250 M5
 
IBM NeXtScale nx360 M4
IBM NeXtScale nx360 M4IBM NeXtScale nx360 M4
IBM NeXtScale nx360 M4
 
IBM System x3650 M4 HD
IBM System x3650 M4 HDIBM System x3650 M4 HD
IBM System x3650 M4 HD
 
IBM System x3300 M4
IBM System x3300 M4IBM System x3300 M4
IBM System x3300 M4
 
IBM System x iDataPlex dx360 M4
IBM System x iDataPlex dx360 M4IBM System x iDataPlex dx360 M4
IBM System x iDataPlex dx360 M4
 
IBM System x3500 M4
IBM System x3500 M4IBM System x3500 M4
IBM System x3500 M4
 
IBM System x3550 M4
IBM System x3550 M4IBM System x3550 M4
IBM System x3550 M4
 
IBM System x3650 M4
IBM System x3650 M4IBM System x3650 M4
IBM System x3650 M4
 
IBM System x3500 M3
IBM System x3500 M3IBM System x3500 M3
IBM System x3500 M3
 
IBM System x3400 M3
IBM System x3400 M3IBM System x3400 M3
IBM System x3400 M3
 
IBM System x3250 M3
IBM System x3250 M3IBM System x3250 M3
IBM System x3250 M3
 
IBM System x3200 M3
IBM System x3200 M3IBM System x3200 M3
IBM System x3200 M3
 
IBM PowerVC Introduction and Configuration
IBM PowerVC Introduction and ConfigurationIBM PowerVC Introduction and Configuration
IBM PowerVC Introduction and Configuration
 
A Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization PerformanceA Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization Performance
 
IBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architecture
 
X6: The sixth generation of EXA Technology
X6: The sixth generation of EXA TechnologyX6: The sixth generation of EXA Technology
X6: The sixth generation of EXA Technology
 

Último

"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionDilum Bandara
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfRankYa
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 

Último (20)

"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 

Driving Business Value on Power Systems with Solid-state Drives

  • 1. u Driving Business Value on Power Systems with Solid State Drives April 2009 By Lotus Douglas, Qunying Gao, Lilian Romero, Linton Ward, and David Whitworth IBM Systems and Technology Group Sunil Kamath IBM Software Group, Information Management Jim Olson IBM Integrated Technology Delivery
  • 2. Executive Summary Solid State Drives (SSDs) offer a number of advantages over traditional hard disk drives (HDDs). With no seek time or rotational delays, SSDs can deliver substantially better I/O performance than HDDs. Capable of driving tens of thousands of I/O operations per second (IOPS), as opposed to hundreds for HDDs, SSDs break through performance bottlenecks of I/O-bound applications. Applications that require hundreds of HDDs can meet their I/O performance requirements with far fewer SSDs, resulting in energy, space, and cost savings. To demonstrate the benefits of SSDs, we ran experiments comparing SSDs with HDDs. The experiments showed a significant performance advantage with SSDs which resulted in a substantial reduction in the number of drives needed to meet the desired level of performance. Fewer drives translate into a smaller physical footprint, reduced energy consumption, and less hardware to maintain. The experiments also showed better application response times for SSDs, which leads to increased productivity and higher customer satisfaction. Solid state drive (SSD) technology was introduced more than three decades ago. Until recently, however, the high cost-per-gigabyte and limited capacity of SSDs restricted deployment of these drives to niche markets or military applications. Recent advances in SSD technology and economies of scale have driven down the cost of SSDs, making them a viable storage option for many I/O intensive enterprise applications. While the cost of SSDs is trending downward, the $/GB for SSDs is still substantially higher than that of HDDs. It is not cost-effective or necessary to replace all HDDs with SSDs. For instance, infrequently accessed (cold) data can reside on lower cost HDDs while frequently accessed (hot) data can be moved to SSDs for maximum performance. The appropriate mix of SSDs and HDDs should be used to strike a proper balance between performance and cost. This paper provides information to enable you to integrate SSDs into your storage infrastructure so that you can immediately take advantage of SSDs to improve your application performance and increase productivity. We describe how to deploy SSDs in a tiered storage environment to allow you to leverage your existing storage with SSDs for maximum performance and minimum cost. The paper also discusses IBM tools and services available to assist you in deploying and managing a storage solution with SSDs. Driving Business Value on Power Systems with Solid State Drives © Copyright IBM Corporation 2009 All Rights Reserved Page 2 of 23
  • 3. Leveraging SSDs in Tiered Storage Pools Many storage environments have grown to support a diversity of needs and evolved into disparate technologies that have lead to storage sprawl. In a large-scale storage infrastructure this yields a sub-optimal storage design that can be improved with a focus on data access characteristics analysis and management. Tiered storage is an approach of utilizing different types of storage throughout the storage infrastructure. It is a mix of higher performing/higher cost storage with lower performing/lower cost storage and placing data accordingly based on specific characteristics such as performance needs, age and importance of data availability. Properly balancing these tiers leads to the minimal cost – best performance solution. The focus of this paper is on the active, mission critical data. Typically this is regarded as Tier 1 storage. SSDs can be considered as a new Tier 0 for the fastest active data. Cost versus Performance Tier0 Ultra High Performance Tier1 High Performance Mission Critical Cost Tier2 Per Medium Gigabyte Performance Non-Mission Critical Tier3 Low Performance Archival/Tape Performance Figure 1: Tiered Storage Environment An example of an existing storage environment is shown in Figure 1. The design results in a significantly increased cost associated with maintaining and supporting the infrastructure. In addition to the immediate effect associated to this balance, growth continues at an increased rate in the higher cost area of Tier 1. Thus, as the growth occurs, the distribution of data would continue to grow in a non-optimal direction unless there is careful planning and discipline in deployment. Driving Business Value on Power Systems with Solid State Drives © Copyright IBM Corporation 2009 All Rights Reserved Page 3 of 23
  • 4. Typically, an optimal design would keep the active operational data in Tier 0 and Tier 1 and leverage Tiers 2 and 3 for less active data. An example is shown in Figure 2. The benefits associated with a Tiered storage approach are simple; it is all cost related. This approach will save significant cost associated with storage itself, as lower Tiered storage is less expensive. Beyond that, there are the environmental savings, such as energy, footprint, and cooling reductions. Storage Pyramid 1-3% Tier 0: Ultra high performance applications 15-20% Tier 1: Mission critical, revenue generating apps 20-25% Tier 2: Meet QoS for non-mission critical apps 50-60% Tier 3: Archives and long term retention Figure 2: Storage Pyramid How to implement Tiered Storage There are three areas of interest critical to implementing, maintaining and leveraging a tiered storage solution. These areas are software tools for identification and reporting of all components of the tiered storage solution, virtualization to enable control and allocation of your solution, and offerings that are designed to provide alignment with your specific needs for IT governance. Tivoli Productivity Center (formerly Total Storage Productivity Center) is a perfect example of software needed to execute data identification for implementation and management. It provides the capability to zero in on data characteristics that can be used to make choices on data placement in implementation and steady state. SAN Volume Controller enables virtualization for your storage environment. Virtualization is critical to maintaining a tiered storage solution as it provides the capability for your administrators to relocate data between tiers of storage without impacting the application and customer service levels. Virtualization allows you to leverage the tiered storage solution to provide the required flexibility for a dynamic infrastructure. IBM Novus Intelligent Storage Service Catalog (ISSC) offering is a single framework aimed at providing storage optimization through more efficient provisioning, better analytics of the storage environment and proper alignment of data to storage tiers. The intellectual capital which comprises ISSC is IBM’s Intelligent Storage Service Request (ISSR), Process Excellence, and Storage Enterprise Resource Planner (SERP).Through detailed interviews with the client, IBM is able to obtain a detailed understanding of the customer’s business requirements. ISSR promotes "right-tiering" and "right-sizing" of storage provisioning based on these business requirements acting as a front end interface for storage requests. Upon receipt of the ISSR, Process Excellence Driving Business Value on Power Systems with Solid State Drives © Copyright IBM Corporation 2009 All Rights Reserved Page 4 of 23
  • 5. is utilized by the storage administrator to ensure that proper process and procedure are utilized at all times to eliminate costly errors or unknown challenges created by lack of standardization. In addition, use of Novus’s SERP software solution can provide very specific data characteristic information that when combined with the customer discussions, can result in a method of more effectively deploying and managing a tiered storage solution. Leveraging SSDs for a High Value Database Improving the response time of some database environments can yield a substantial benefit to business results. While a tiered storage strategy focuses on reducing the operational costs, some environments can leverage the improved I/O performance that SSDs provide. Further, beyond the benefits of improved performance, other implied benefits such as infrastructure simplification, ease of storage management, and reduced need for fine tuning skills are paramount and result in substantial IT efficiency and reduced costs. Storage management, performance, and cost are big issues in the database world. Database workloads, both transactional and data warehousing typically require lots of HDDs for I/O performance – both IOPS and bandwidth. Traditional enterprise HDDs, including the 15K RPM HDDs are limited by the rate of head movement and deliver random I/O performance of approximately 150 -175 IOPS with a latency of about 5 -7 msecs and sequential scan bandwidth of about 30 - 60 MB/sec for most database workloads. Write-intensive batch jobs are under pressure to complete within the increasingly shrinking time-window leading to reduced up-time for transactional database systems. In addition, maintenance jobs such as backup, restore, and database crash recovery which can induce too much pressure on I/O are also time critical and important to the business to maintain a highly operational database system. Backup operations tend to drive high levels of sequential I/Os while recovery processes drive high levels of random I/O. In many customer environments, to maintain the high IOPS rate required to service applications with reasonable response times, less data is placed on HDDs resulting in poor IOPS per gigabyte of available storage capacity. This implies that a lot of capacity on HDDs (greater than 50% in most cases) is wasted or under-utilized and the situation has only worsened with larger density HDDs. SSDs offer game-changing performance for database applications by removing the limitations traditional rotating disks impose on database design. This will revolutionize database architectural design by removing the traditional I/O bottleneck. SSDs eliminate the need to have a large number of under-utilized (short-stroked) HDDs to meet the heavy I/O demands of database applications. Customer Scenarios that can Benefit from SSDs A broad spectrum of industries from the financial sector to the consumer service industry, including government, with varied or common business challenges can benefit from SSD technology. These businesses at a fundamental level rely on improved responsiveness from their critical transactional, Customer Relationship Management (CRM) or data warehousing solutions that enable them to service their clients faster and react to changes and new opportunities more rapidly, resulting in improved profitability and increased revenue. 
With an explosion of data volumes and a need to convert them into Driving Business Value on Power Systems with Solid State Drives © Copyright IBM Corporation 2009 All Rights Reserved Page 5 of 23
  • 6. trustable information with speed, SSDs help enable IT to address the critical storage challenges to satisfy business needs. The following business scenarios represent a few cases where SSD technology can deliver significant value. • Customer retention by servicing them with superior satisfaction. Enterprises that empower their customer support representatives to service their clients' needs in real time results in better customer loyalty. • 360 degree view of customer relationships that enables businesses to respond to market needs and more rapidly identify new opportunities • Real time and fast fraud detection enables enterprises spanning financial, insurance, consumer services organizations, etc to improve profitability and facilitate better customer value. • Faster reporting and business analytics capabilities empower organizations to deal with risk management in an efficient manner. • Faster order processing systems where speed of transaction processing lead to increased revenue and customer satisfaction. To illustrate the type of workloads that benefit from SSD technology, two scenarios from real world customer applications are chosen. Figure 3 shows a workload profile from a large enterprise in the consumer products company running their global and mission critical SAP R/3 workload with DB2™ on Power Systems. Figure 4 illustrates a workload profile from a global financial institution running DB2 on Power Systems which services tens of thousands of transactions per second. The SAP R/3 workload is an 8 TB DB2 database that is hosted off a single IBM System Storage DS8100 disk system with 14 TB of usable capacity. The database is over provisioned by nearly 75%, primarily due to the need for IOPS from physical disk spindles. However as can be noted in Figure 3, the CPU is still about 30-40% waiting on I/O. These workloads can benefit from migrating the storage from HDDs to SSDs within a DS8100 which will reduce I/O wait, improve SAP transaction response time and save on storage costs by eliminating the need to over provision storage. CPU and I/O Profile User% Sys% Wait% 100 80 60 40 20 0 Figure 3: CPU and I/O profile of SAP R/3 workload with DB2 on Power Systems The next scenario is from a large and global financial industry company. Figure 4 illustrates the CPU profile of a 200 gigabyte DB2 database that is servicing tens of Driving Business Value on Power Systems with Solid State Drives © Copyright IBM Corporation 2009 All Rights Reserved Page 6 of 23
  • 7. thousands of transactions per second with hundreds of concurrent users. In this environment, the DB2 database was provisioned with a single DS4800 controller with two terabytes of RAID storage. This represents capacity that is ten times more than required to handle the I/O performance and therefore overall transaction response times. As can be noted from Figure 4, the CPU is still about 20% waiting on I/O. This scenario is another example where migrating to SSDs can result in consolidation of drives by up to 10 x while further improving the transaction response times and handling large volumes of concurrent users. CPU and IO profile User% Sys% W ait% 60 50 40 30 20 10 0 Figure 4: CPU and I/O profile of a DB2 workload on Power Systems handling large volume of transactions Quantifying Benefits of SSDs using an OLTP Workload On-line Transaction Processing (OLTP) applications are characterized by large numbers of users concurrently executing transactions against a database. OLTP applications typically have a significant amount of random I/O and a high number of physical reads with the goal of ensuring consistently low response times. Typical OLTP applications include banking applications, order processing systems, and airline reservation systems. OLTP transactions spend a great deal of time waiting on I/O. The I/O wait time is considerably longer for HDDs than SSDs due to delays inherent to HDD mechanical parts. SSDs are ideal for OLTP workloads since they do not have any seek or rotational delays and can process I/O faster. An SSD solution for OLTP applications can offer the following benefits: • a substantial reduction in the number of drives required • increased I/O and throughput performance • a substantial reduction in response time • a reduction in energy consumption • reduced lab space requirement To characterize the benefits of SSDs for transactional workloads, an in-house OLTP database application was chosen. For our experiments, the application characteristics were 60% random reads and about 40% random writes. The system configuration was as follows: Driving Business Value on Power Systems with Solid State Drives © Copyright IBM Corporation 2009 All Rights Reserved Page 7 of 23
  • 8. Server Power 550 Express Model 8204- E8A with 128GB of memory ™ OS AIX 61 TL2 Database DB2 9.5 FP3 In total, three experiments were conducted by using different types of storage. For the base configuration, the entire database was placed on HDDs using a total of 800 drives in order to meet the response time requirements. The database was built using RAID5 where the tables containing the highest number of IOPS (hot data) were spread across 720 FC HDDs and the remaining tables (cold data) were spread across 80 SAS HDDs. The configuration is shown in Figure 5. Base Configuration using 800 HDDs 720 x 15K RPM FC HDDs 80 x 15K RPM SAS HDDs (Hot Data) (Cold Data) Figure 5: Base Configuration Using 800 HDDs In the second experiment a total of 116 drives were used consisting of a mix of 36 SSDs and 80 HDDs. The hot database tables were placed on the SSDs and the colds tables remained on the 80 SAS HDDs. The 36 SSDs were placed in 6 EXP12S drawers. Each EXP12S was attached to a PCI-X DDR 1.5 GB Cache SAS RAID Adapter. A total of six 5+P RAID5 arrays were created on the SSDs. The cache on RAID adapters can become a performance bottleneck for some workloads with this many SSDs on one adapter, so the RAID adapter cache was disabled for this experiment. The response times for these SSDs is so fast that the database saw very good response times on this workload even with the adapter's cache disabled. The configuration is depicted in Figure 6. Driving Business Value on Power Systems with Solid State Drives © Copyright IBM Corporation 2009 All Rights Reserved Page 8 of 23
Figure 6: Mixed SSD-HDD configuration, with hot data moved to SSDs (36 SSDs on 6 SAS adapters for hot data; 80 x 15K RPM SAS HDDs on 4 FC adapters for cold data)

In the third and final experiment, a total of 116 drives were also used. The hot data was placed on 36 HDDs, the same number of drives as the SSDs held in the mixed storage configuration. Because of the price and performance differential, we do not expect customers to do a one-to-one substitution of SSDs for HDDs; this experiment was designed purely to show a direct performance comparison between SSDs and HDDs. So, unlike the 800-HDD experiment, we did not "short stroke" the HDDs in order to achieve better I/O performance. RAID5 was used for this experiment as well. The configuration is shown in Figure 7.

Figure 7: HDD configuration with the same storage footprint as the SSD configuration (36 x 15K RPM SAS HDDs on 6 SAS adapters for hot data; 80 x 15K RPM SAS HDDs on 4 FC adapters for cold data)

The end goal of all the experiments was to compare response times, throughput, space usage, and energy consumption using SSDs versus HDDs. The experiments were performed by executing a number of different types of OLTP transactions against the database and collecting performance statistics to understand the behavior of the drives.
Results of Experiments

800 HDDs vs. 116 Drives (mix of 36 SSDs and 80 HDDs)

For the base configuration with 800 HDDs, the system CPU was 67% busy while the remaining 33% was spent waiting for I/Os to complete. The IOPS per drive for the drives holding the hot tables maxed out at about 220. In comparison, for the configuration using SSDs for the hot tables, CPU utilization reached over 80% and the IOPS per drive was over 7000. With SSDs, I/Os were serviced much faster, resulting in better storage and application response times. In addition, more of the CPU was freed up to do useful work instead of waiting for I/O, so system throughput increased.

Figure 8 shows the database transaction response times and relative system throughput for the 800-HDD and the mixed SSD-HDD configurations. The configuration with SSDs achieved a 1.72X improvement in application response times and a 1.65X improvement in transaction throughput over the 800-HDD configuration.

Figure 8: SSD vs. HDD database transaction response time and throughput (SSD = 1.7X better transaction response time and 1.65X better throughput)

The average drive read response times for the 800-HDD and mixed SSD-HDD configurations are shown in Figure 9. The drive read response time improved by 3X when the hot tables were moved to the SSDs (1.7 ms for SSDs vs. 5.3 ms for HDDs).

Figure 9: SSD-HDD read response time comparison (SSD = 3X better response time)
The efficiency of the drives was measured in terms of transaction throughput per drive. For the 800-HDD configuration the throughput per drive was 4.2, and for the SSD configuration it was 137.5. This means that each SSD performed 33 times more work than an HDD, as shown in Figure 10. This disparity in throughput per drive is due to the SSDs being so much faster than the HDDs: many more HDDs are required to achieve the same throughput as a small number of SSDs. Even with a 20:1 ratio of HDDs to SSDs, the 800-HDD configuration was still bottlenecked by I/O and could achieve only half the throughput of the mixed SSD-HDD configuration.

Figure 10: SSD-HDD throughput-per-drive comparison (SSD = 33X HDD)

Moving the hot tables to SSDs reduced the number of physical drives required from 720 to 36. This reduction resulted in space and energy savings. Figure 11 shows that energy usage, measured in watts per transaction/second, is about 90% lower for SSDs. The energy usage was measured at peak throughput for all the storage components; the system energy and AC cooling energy were not included in the measurement. Figure 12 shows that the space reduction between HDDs and SSDs is about 84%. The space, in rack units (U), was calculated from the total space used by the storage subsystem: the controllers, drive enclosures, and drives.

Figure 11: SSD-HDD energy usage comparison, in watts per transaction/second (SSD = 90% less energy usage)
Figure 12: SSD-HDD space usage comparison, in rack units (SSD = 84% less space)
116 Drives (36 SSDs + 80 HDDs) vs. 116 Drives (36 HDDs + 80 HDDs)

In this experiment the number of drives remained the same: both the SSD configuration and the HDD configuration used 36 drives for the hot data and 80 drives for the cold data. The purpose of the experiment was to make a direct comparison of HDD and SSD performance in a highly I/O-intensive environment.

Focusing the analysis on the most interesting subset of the 116-drive comparison, the 36 HDDs versus the 36 SSDs that each hold the "hot" data (the tables with the highest amount of IOPS), the following observations were made. For the 36-HDD measurement, CPU utilization was only 5%, and the remaining 95% was spent either waiting on I/Os to complete or idle. The drive read response time was 6.8 ms and the IOPS per drive was about 170. In comparison, the read response time and IOPS per drive for the 36-SSD measurement were 1.7 ms and 7000, respectively. Figure 13 shows both the relative response times and the relative drive performance for the two 116-drive configurations.

Figure 13: 116 drives (36 SSDs + 80 HDDs vs. 36 HDDs + 80 HDDs): relative read response times (SSD = 4X better) and relative drive performance (SSD = 40X better IOPS/drive, 42X better transactions/drive)

Determining Whether an AIX Application Might Benefit from SSDs

AIX provides performance tools that can be used to determine whether a configuration has hot data that would perform better if moved to SSDs. The most valuable tools for assessing data hot spots are the AIX tools iostat and filemon. In addition, database vendors provide tools to analyze hot data.
In order to demonstrate the capabilities of these tools, we will compare iostat and filemon data from the 800-drive HDD run and the 116-drive mixed SSD-HDD run. The data will show the I/O performance improvement gained from using SSDs.

Identifying Hot Disks

The iostat tool provides a good first-level I/O analysis because it gives a high-level, real-time view of overall storage performance and is simple to run. To isolate the hot data, look for data subject to a high rate of random, small-block I/O per GB on the HDDs.

Running the command "iostat -t" provides CPU utilization details. If there is no I/O wait time, then SSDs will not improve system performance. As shown in Table 1, there was a substantial amount of I/O wait time for the 800-HDD experiment, so there is significant potential for performance improvement from using SSDs.

Storage Configuration                     % iowait
800-HDD experiment                        33.2
116-drive mixed SSD-HDD experiment        1.9

Table 1: SSD and HDD iowait Output

Running iostat with the "-D" flag, as shown in Tables 2 and 3, provides detailed output per logical disk (hdisk), including read and write response times. In order to focus on the hdisks with the hot data, only those that contain the hot data and the database logs (hdisk320-321) are shown below. There are several things to notice here:

1. The total system storage I/O requests, or transfers per second (tps), are shown at the top of each report. The tps is the sum of reads per second (rps) and writes per second (wps). Note that the tps on the SSD run is about double the tps on the HDD run.
2. HDDs max out at about 200 IOPS, so look for hdisks that do more than 200 IOPS (or tps) per physical drive.
   • For the 800-HDD configuration, each hdisk consists of 30 physical drives: hdisk178 - hdisk201 are RAID5 arrays, each with 30 x 15K RPM HDDs.
   • For the 116-drive mixed SSD-HDD configuration, each hdisk consists of 6 physical drives: hdisk202 - hdisk207 are RAID5 arrays, each with 6 SSDs.
   • Each write to a RAID5 array causes 4 drive I/Os (2 reads and 2 writes).
3. "%tm act" shows the percentage of time that there is at least one I/O request outstanding to that hdisk. Look for hdisks that are at least 99% busy.
4. The read and write "avg serv" values indicate the average service time per transfer.
                     Kbps        tps     Kb_read    Kb_wrtn
Physical Disks:    326144    77196.3    1892352    1372352

               xfers                                 read                        write
          %tm    bps     tps     bread  bwrtn     rps    avg  min   max      wps    avg  min   max
          act                                            serv serv  serv            serv serv  serv
hdisk178  100   16.1M  3929.7   10.6M   5.5M   2592.7   6    0.1  250.3   1337     2.5  0.2  261.8
hdisk179  100   15.7M  3844.9   10.3M   5.4M   2522.5   5.9  0.1  223.4   1322     3.2  0.2  275.1
hdisk180  99.9  8.8M   2148.8   5.6M    3.2M   1379     4.3  0.1  199.6   769.7    3.7  0.2  123.4
hdisk181  100   9.1M   2216.9   5.8M    3.2M   1423.7   4.1  0.1  214.5   793.2    2    0.2  122.7
hdisk182  99.6  9.1M   2230.8   5.9M    3.2M   1444.4   4.2  0.1  205.1   786.4    2.8  0.2  230.4
hdisk183  100   9.2M   2234     5.9M    3.3M   1433.7   4.1  0.1  220.8   800.3    3.3  0.2  122.7
hdisk184  100   15.7M  3833.4   9.7M    6.0M   2368.7   7.6  0.1  542     1465     2.5  0.2  448.8
hdisk185  100   15.7M  3842     9.7M    6.0M   2380.2   7.3  0.1  330.1   1462     3.3  0.2  280
hdisk186  99.9  9.0M   2193     5.6M    3.3M   1375.8   4.9  0.1  117.4   817.2    3.9  0.2  101.9
hdisk187  99.5  8.9M   2183.9   5.6M    3.3M   1368.5   4    0.1  125.8   815.4    1.9  0.2  91
hdisk188  99.5  9.0M   2208.8   5.6M    3.4M   1378     3.9  0.1  270     830.8    2.8  0.2  91.1
hdisk189  99.8  9.0M   2203.7   5.6M    3.4M   1373.5   3.9  0.1  128     830.2    3.3  0.2  120.6
hdisk190  100   15.4M  3761.9   9.8M    5.6M   2384.6   6.2  0.1  207.2   1377     1.9  0.2  318.1
hdisk191  100   15.4M  3765.6   9.8M    5.6M   2400.3   6.1  0.1  237.8   1365     2.6  0.2  344
hdisk192  99.8  9.1M   2218.2   5.9M    3.2M   1444.1   4.3  0.1  202.1   774.1    2.8  0.2  179.7
hdisk193  99.5  9.2M   2245.8   6.0M    3.2M   1468.5   4.2  0.1  213.6   777.2    1.7  0.2  176.5
hdisk194  99.9  9.5M   2317.2   6.2M    3.3M   1521.3   4.2  0.1  234.6   795.9    2.2  0.2  176.5
hdisk195  99.8  9.5M   2311     6.2M    3.3M   1513     4.2  0.1  195.3   798      2.5  0.2  177
hdisk196  100   15.6M  3802     9.7M    5.9M   2369.6   6.1  0.1  250     1432     1.8  0.2  414.9
hdisk197  100   15.5M  3773.7   9.6M    5.9M   2345.5   6.1  0.1  238     1428     2.5  0.2  330.8
hdisk198  99.8  8.6M   2095     5.3M    3.3M   1289.6   4.4  0.1  228.9   805.4    2.8  0.2  184
hdisk199  99.4  8.6M   2102.5   5.3M    3.4M   1284.6   4.3  0.1  204.8   817.9    1.6  0.2  184
hdisk200  99.5  9.4M   2292.6   6.1M    3.3M   1481.1   4.2  0.1  215.2   811.5    2.1  0.2  181.8
hdisk201  99.8  9.5M   2310.4   6.1M    3.3M   1495.3   4.2  0.1  226.4   815.1    2.4  0.2  181.8
hdisk320  27.3  12.8M  3108.9   0.0     12.8M  0.0      0.0  0.0  0.0     3109     0.1  0.1  15.5
hdisk321  31.4  12.9M  3143.7   0.0     12.9M  0.0      0.0  0.0  0.0     3144     0.1  0.1  16.7

Table 2: 800-HDD Experiment iostat -D Output

                     Kbps        tps     Kb_read    Kb_wrtn
Physical Disks:    538923     127744    3406752    1991904

               xfers                                 read                        write
          %tm    bps     tps     bread  bwrtn     rps    avg  min   max      wps    avg  min   max
          act                                            serv serv  serv            serv serv  serv
hdisk202  100   79.1M  19302.1  53.0M   26.1M  12942.2  1.7  0.1  57      6360     3.9  0.5  49.9
hdisk203  100   76.7M  18736.1  51.1M   25.6M  12476.3  1.7  0.1  61.5    6259.8   3.8  0.5  56.2
hdisk204  100   76.8M  18753.8  51.7M   25.1M  12622.1  1.5  0.1  41.8    6131.7   3.6  0.4  51.3
hdisk205  100   78.9M  19259.9  52.8M   26.1M  12888.9  1.7  0.1  40      6371     3.9  0.6  50.3
hdisk206  100   77.3M  18879.8  51.6M   25.8M  12588.7  1.6  0.1  54.7    6291.1   3.7  0.5  52.1
hdisk207  100   77.2M  18853.8  51.9M   25.3M  12681.5  1.5  0.1  58.4    6172.3   3.6  0.5  59.4
hdisk320  35.4  13.7M  3292.4   0       13.7M  0        0    0    0       3292.4   0.1  0.1  18
hdisk321  36.3  13.6M  3273.4   0       13.6M  0        0    0    0       3273.4   0.1  0.1  18.2

Table 3: Mixed SSD-HDD Experiment iostat -D Output
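To turn the per-hdisk tps figures in Tables 2 and 3 into per-physical-drive IOPS, divide by the number of drives in the array and charge each RAID5 write its four drive I/Os. A back-of-envelope sketch using bc; the rps and wps values are taken from hdisk178 and hdisk202 above, and the 30- and 6-drive array widths come from the configurations described earlier:

    # Physical IOPS per drive ~= (rps + 4 x wps) / drives per array,
    # since each RAID5 small write costs 4 drive I/Os (2 reads + 2 writes).

    # hdisk178 (800-HDD run): rps 2592.7, wps 1337, 30 HDDs per array
    echo "scale=1; (2592.7 + 4 * 1337) / 30" | bc     # ~264.7 IOPS per HDD

    # hdisk202 (mixed run): rps 12942.2, wps 6360, 6 SSDs per array
    echo "scale=1; (12942.2 + 4 * 6360) / 6" | bc     # ~6397.0 IOPS per SSD

The HDD arrays are being pushed past the roughly 200 IOPS that a 15K RPM drive can sustain, which is consistent with the long maximum service times in Table 2, while the SSD arrays remain comfortably within their capabilities.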
Identifying Hot Logical Volumes

After using iostat to determine that there are hot hdisks on a system, the next step is to use filemon to find the hot logical volumes (LVs); the LVs map to the database tables. Filemon provides summary and detailed performance reports on files, LVs, and physical volumes (PVs). The filemon output below includes the LV summary report and some examples from the detailed LV reports.

How to run filemon: filemon can be run either in online mode or in offline mode using a previously collected trace. The offline method, used for this data, is shown below. Note that running the AIX trace command can cause significant performance degradation if the system CPU is very busy. This caused some of the SSD throughputs reported by filemon to be lower than those reported by iostat; the HDD results are not affected because there were plenty of spare CPU cycles in that experiment due to the substantial I/O wait time.

    trace -andfp -C all -T 30000000 -L 30000000 -o filename.trc
    gensyms -F > gensyms.out    # -F provides the file, LV, and hdisk names needed by filemon
    filemon -i filename.trc -n gensyms.out -O detailed,all -o filemon.out

Filemon's "Most Active Logical Volumes" table sorts the LVs by utilization. LVs with low utilization typically do not need to be moved to SSDs. LVs with high utilization are good candidates for further investigation into whether they should be moved to SSDs (a high utilization does not by itself indicate a performance problem).

The 800-HDD "Most Active Logical Volumes" filemon report, shown in Table 4, lists 28 LVs that were at least 91% busy during the trace. The last LV listed is only 68% busy; the remaining LVs on the system are even less busy and are not shown here. The database tables on the 28 busiest LVs in the 800-HDD experiment were all moved to SSDs for the 116-drive mixed SSD-HDD experiment.

800-HDD: Most Active Logical Volumes
util   #rblk   #wblk   KB/s      volume
1      3592    2248    10521.2   /dev/hddR04V1S
0.99   5440    2024    13447     /dev/hddR04V2S
0.99   4200    1288    9887.1    /dev/hddR01V4S
0.99   5256    2120    13288.4   /dev/hddR03V3S
0.99   5816    1080    12423.7   /dev/hddR01V2S
0.99   5288    1160    11616.6   /dev/hddR01V1S
0.99   4136    2056    11155.4   /dev/hddR03V4S
0.99   4992    1552    11789.5   /dev/hddR02V2S
0.99   6344    1456    14052.3   /dev/hddR02V1S
0.99   4264    2168    11587.7   /dev/hddR03V2S
0.99   5096    1208    11357.1   /dev/hddR01V3S
0.99   4592    2168    12178.7   /dev/hddR03V1S
0.99   6680    1240    14268.5   /dev/hddR02V4S
0.99   4912    1776    12048.9   /dev/hddR04V3S
0.99   4048    1904    10723     /dev/hddR04V4S
0.99   4936    1408    11429.2   /dev/hddR02V3S
0.98   1792    1512    5952.4    /dev/hddR03V1C
0.98   1608    1256    5159.7    /dev/hddR04V1C
0.97   1720    1248    5347.1    /dev/hddR04V3C
0.97   1624    584     3977.9    /dev/hddR02V2C
0.97   1608    1328    5289.4    /dev/hddR04V2C
0.97   1736    664     4323.8    /dev/hddR02V3C
0.96   1592    632     4006.7    /dev/hddR01V3C
0.95   1816    608     4367      /dev/hddR01V2C
0.94   1624    1368    5390.3    /dev/hddR03V2C
0.94   1448    728     3920.2    /dev/hddR01V1C
0.93   1512    760     4093.2    /dev/hddR02V1C
0.92   1592    1448    5476.8    /dev/hddR03V3C
0.68   0       11424   20581.2   /dev/dbloglv

116 Drives (Mixed SSD-HDD): Most Active Logical Volumes
util   #rblk   #wblk   KB/s      volume
0.98   6544    4888    17968.5   /dev/ssdR02V2S
0.98   7312    3352    16761.4   /dev/ssdR04V3S
0.98   7568    4136    18396     /dev/ssdR03V4S
0.98   5976    4976    17214.1   /dev/ssdR02V4S
0.97   6984    5816    20118.7   /dev/ssdR02V3S
0.97   5928    5120    17364.9   /dev/ssdR02V1S
0.97   5760    5120    17100.9   /dev/ssdR01V4S
0.97   5152    3600    13756.2   /dev/ssdR04V2S
0.97   5864    3520    14749.5   /dev/ssdR03V3S
0.96   6864    3440    16195.5   /dev/ssdR03V1S
0.96   4864    4136    14146     /dev/ssdR04V4S
0.96   6456    3528    15692.6   /dev/ssdR04V1S
0.95   6768    3968    16874.6   /dev/ssdR03V2S
0.93   5344    5616    17226.6   /dev/ssdR01V3S
0.91   4880    4912    15390.8   /dev/ssdR01V2S
0.91   3880    4968    13907     /dev/ssdR01V1S
0.86   3136    3288    10097.1   /dev/ssdR03V2C
0.81   3320    1176    7066.7    /dev/ssdR02V1C
0.8    2600    2968    8751.6    /dev/ssdR04V2C
0.77   2872    3136    9443.2    /dev/ssdR04V1C
0.77   3224    1024    6676.9    /dev/ssdR01V1C
0.73   0       14520   22822.1   /dev/dbloglv
0.71   1960    3264    8210.9    /dev/ssdR03V3C
0.71   2336    3192    8688.8    /dev/ssdR03V1C
0.68   2328    3016    8399.6    /dev/ssdR04V3C
0.68   2712    1176    6111.1    /dev/ssdR02V3C
0.66   2352    1096    5419.5    /dev/ssdR01V3C
0.63   2192    1040    5080      /dev/ssdR02V2C
0.61   2016    1224    5092.5    /dev/ssdR01V2C

Table 4: 800-HDD and Mixed SSD-HDD filemon Report

Detailed Logical Volume Reports

Detailed reports are shown both for a hot LV that is a good candidate to move to an SSD and for the database log LV, which is not a good candidate. The reports are included in Tables 5, 6, 7, and 8.

Hot LV details. The important things to note here are:
1. The average I/O size is 4 KB (8.0 512-byte blocks), which is a good match for SSDs.
2. The I/O is completely random (the read and write sequence counts equal the numbers of reads and writes).
3. The read response times are relatively long on the HDDs.
4. The average seek distance is very long (about 20.9 GB).

VOLUME: /dev/hddR04V1S  description: raw
reads:                  449 (0 errs)
read sizes (blks):      avg 8.0    min 8      max 8        sdev 0.0
read times (msec):      avg 5.801  min 0.118  max 34.264   sdev 4.517
read sequences:         449
read seq. lengths:      avg 8.0    min 8      max 8        sdev 0.0
writes:                 281 (0 errs)
write sizes (blks):     avg 8.0    min 8      max 8        sdev 0.0
write times (msec):     avg 1.194  min 0.373  max 4.414    sdev 0.641
write sequences:        281
write seq. lengths:     avg 8.0    min 8      max 8        sdev 0.0
seeks:                  730 (100.0%)
seek dist (blks):       init 105356576, avg 40796294.3  min 14072  max 115550480  sdev 28644621.9
time to next req(msec): avg 0.380  min 0.000  max 3.741    sdev 0.559
throughput:             10521.2 KB/sec
utilization:            1.00

Table 5: 800-HDD Detailed filemon LV Report

VOLUME: /dev/ssdR02V2S  description: raw
reads:                  818 (0 errs)
read sizes (blks):      avg 8.0    min 8      max 8        sdev 0.0
read times (msec):      avg 1.030  min 0.314  max 14.616   sdev 1.894
read sequences:         818
read seq. lengths:      avg 8.0    min 8      max 8        sdev 0.0
writes:                 611 (0 errs)
write sizes (blks):     avg 8.0    min 8      max 8        sdev 0.0
write times (msec):     avg 3.066  min 0.853  max 18.028   sdev 2.961
write sequences:        611
write seq. lengths:     avg 8.0    min 8      max 8        sdev 0.0
seeks:                  1429 (100.0%)
seek dist (blks):       init 15678600, avg 39276756.9  min 18680  max 117667808  sdev 28871870.5
time to next req(msec): avg 0.222  min 0.000  max 5.564    sdev 0.410
throughput:             17968.5 KB/sec
utilization:            0.98

Table 6: Mixed SSD-HDD Detailed filemon LV Report
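With dozens of LVs to triage, the randomness indicators in these detailed reports can be extracted programmatically rather than by eye. A minimal awk sketch, assuming the report layout shown in Tables 5 and 6 (filemon.out is the output file from the filemon invocation above):

    # For each LV, report the seek percentage (randomness) and the average
    # read service time; highly random LVs with long read times are the
    # strongest SSD candidates.
    awk '/VOLUME:/              { vol = $2; read_ms = "n/a" }
         /read times \(msec\):/ { read_ms = $5 }
         /seeks:/               { pct = $3; gsub(/[()%]/, "", pct)
                                  printf "%-22s seeks %6s%%  avg read %6s ms\n",
                                         vol, pct, read_ms }' filemon.out

For /dev/hddR04V1S this prints a 100.0% seek rate with a 5.801 ms average read time, flagging it immediately as random, slow, and worth moving.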
Database log details. The database log is shown here as an example of data that would not benefit from SSDs (the log is on HDDs in both runs):
1. The I/O is very sequential. (The log is striped across two hdisks, which causes filemon to report a substantial number of write sequences.)
2. The response times for both runs are very short thanks to the storage array's write cache.
3. The average seek distance is very short (8 KB).

VOLUME: /dev/dbloglv  description: raw
writes:                 1424 (0 errs)
write sizes (blks):     avg 8.0    min 8      max 16       sdev 0.4
write times (msec):     avg 0.132  min 0.114  max 0.952    sdev 0.035
write sequences:        487
write seq. lengths:     avg 23.5   min 8      max 56       sdev 5.8
seeks:                  487 (34.2%)
seek dist (blks):       init 25305528, avg 16.0  min 16  max 16  sdev 0.0
time to next req(msec): avg 0.195  min 0.130  max 3.317    sdev 0.137
throughput:             20581.2 KB/sec
utilization:            0.68

Table 7: 800-HDD Experiment filemon Database Log Report

VOLUME: /dev/dbloglv  description: raw
writes:                 1787 (0 errs)
write sizes (blks):     avg 8.1    min 8      max 32       sdev 1.3
write times (msec):     avg 0.131  min 0.116  max 0.885    sdev 0.033
write sequences:        553
write seq. lengths:     avg 26.3   min 8      max 88       sdev 7.9
seeks:                  553 (30.9%)
seek dist (blks):       init 57817232, avg 16.0  min 16  max 16  sdev 0.0
time to next req(msec): avg 0.178  min 0.131  max 8.299    sdev 0.218
throughput:             22822.1 KB/sec
utilization:            0.73

Table 8: Mixed SSD-HDD Experiment filemon Database Log Report

Which LVs Should Be Moved to SSDs

Once the hot LVs are known, use the "lslv" command to find the LV sizes and calculate the IOPS/GB. The LVs with the highest IOPS/GB should be moved first.
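Putting the pieces together, a short script can rank candidate LVs by I/O density. This is a sketch under stated assumptions: the per-LV KB/s figures are copied by hand from the filemon summary (Table 4) and converted to IOPS at the 4 KB I/O size observed above, and the lslv output layout (the "LPs:" and "PP SIZE:" fields) matches a typical AIX release:

    #!/usr/bin/ksh
    # Rank candidate LVs by IOPS per GB; migrate the densest LVs first.
    for entry in hddR04V1S:10521 hddR02V4S:14268 hddR04V4S:10723; do
        lv=${entry%:*}
        kbps=${entry#*:}
        # LV size in MB = logical partitions x physical partition size (MB)
        size_mb=$(lslv "$lv" | awk '
            /^LPs:/    { lps = $2 }
            /PP SIZE:/ { for (i = 1; i <= NF; i++)
                             if ($i == "SIZE:") pp = $(i + 1) }
            END        { print lps * pp }')
        echo "$lv $kbps $size_mb" | awk '{
            iops = $2 / 4                  # 4 KB per I/O
            gb   = $3 / 1024
            printf "%-12s %7.0f IOPS %7.1f GB %7.1f IOPS/GB\n",
                   $1, iops, gb, iops / gb }'
    done

The absolute numbers matter less than the ordering: the LVs at the top of the list repay scarce SSD capacity fastest.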
Using DB2 Snapshots to Identify Hot Tablespaces

The DB2 snapshot monitor provides another means of identifying the hot tablespaces that are the best candidates to place on SSDs. It captures information about the database and any connected applications at a specific time. A DB2 tablespace snapshot provides the following information:

• Tablespace name
• Tablespace page size
• Number of used pages
• Buffer pool data/index/xda physical reads
• Buffer pool read/write time

To identify which containers are hot, analyze the following properties:

• Access density: a function of the number of physical I/Os relative to the number of used pages in the tablespace.
• Access latency: a measure of the latency of those physical I/Os.
• Relative weight: calculated per tablespace, as a function of access density and access latency, to help prioritize which tablespaces to place on SSDs.
• Sequentiality ratio: the ratio of sequential to random accesses.

The weighting factor is used to determine which tablespaces are the better candidates to place on SSDs. The steps below show how to compute it:

    Total physical I/Os = Buffer pool data physical reads
                        + Buffer pool index physical reads
                        + Buffer pool xda physical reads
                        + Buffer pool temporary data physical reads
                        + (Direct reads * 512) / tablespace page size

    Page velocity       = Total physical I/Os / snapshot interval in seconds
    Access time         = Total buffer pool read time + Direct reads elapsed time
    Access density      = Page velocity / number of used pages in tablespace
    Access latency      = Access time / Total physical I/Os
    Weighting factor    = Access density * Access latency

    Sequentiality ratio = (Asynchronous pool data page reads
                         + Asynchronous pool index page reads
                         + Asynchronous pool xda page reads)
                        / (Buffer pool data physical reads
                         + Buffer pool index physical reads
                         + Buffer pool xda physical reads)

When this information is summarized for all tablespaces in descending order of weighting factor, the tablespaces with the higher weighting factors are the better candidates for SSDs. Tablespaces with a high sequentiality ratio are better left on HDDs, since sequential I/O performs well on spinning disks.

Table 9 shows example data from DB2 tablespace snapshots taken on the 800-HDD configuration:

    db2 get snapshot for tablespaces on DBNAME

Tablespace name                               TS_S_13     TS_OL_1
Tablespace page size (bytes)                  4096        4096
Number of used pages                          15001088    8561920
Buffer pool data physical reads               10162297    458610
Buffer pool temporary data physical reads     0           0
Asynchronous pool data page reads             0           0
Buffer pool index physical reads              0           0
Buffer pool temporary index physical reads    0           0
Asynchronous pool index page reads            0           0
Buffer pool xda physical reads                0           0
Buffer pool temporary xda physical reads      0           0
Asynchronous pool xda page reads              0           0
Total buffer pool read time (ms)              26705251    1189809
Total elapsed asynchronous read time          0           0
Direct reads                                  0           0
Direct reads elapsed time (ms)                0           0

Table 9: Tablespace snapshot for TS_S_13 and TS_OL_1
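The sketch below applies this arithmetic to the TS_S_13 counters from Table 9. The 10-second snapshot interval is an assumption: the interval is simply the time between your two snapshots and is not part of the snapshot output, so absolute results will differ from Table 10. What matters is the relative ordering the weighting factors impose across tablespaces:

    awk -v data_rd=10162297 -v idx_rd=0 -v xda_rd=0 -v tmp_rd=0 \
        -v dir_rd=0 -v dir_ms=0 -v bp_ms=26705251 \
        -v pagesz=4096 -v used=15001088 -v interval=10 'BEGIN {
        total_io = data_rd + idx_rd + xda_rd + tmp_rd + (dir_rd * 512) / pagesz
        velocity = total_io / interval           # physical I/Os per second
        density  = velocity / used               # I/O pressure per used page
        latency  = (bp_ms + dir_ms) / total_io   # ms per physical I/O
        printf "density %.4f  latency %.3f ms  weighting factor %.4f\n",
               density, latency, density * latency
    }'

Running the same computation with the TS_OL_1 counters yields a far smaller weighting factor, reproducing the ranking shown in Table 10.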
The tablespace weighting factors for TS_S_13 and TS_OL_1 summarize as follows:

                        TS_S_13      TS_OL_1
Total physical I/Os     21099197     458610
Page velocity           2109919.7    45861
Access time             26705251     1189809
Access density          1.406        0.053
Access latency          1.265        2.504
Weighting factor        1.779        0.133
Sequentiality ratio     0            0

Table 10: Tablespace weighting factors for TS_S_13 and TS_OL_1

Tablespace TS_S_13 has a much higher weighting factor than tablespace TS_OL_1 and is therefore the better candidate for moving to an SSD.

Migration Tools

As discussed earlier in this paper, after identifying the hot tablespaces using iostat, filemon, or a DB2 tablespace snapshot, the next step is to move them from HDDs to SSDs. Several tools are available for data migration; this paper focuses on IBM Softek Transparent Data Migration Facility (TDMF) and the AIX migratepv command.

Softek TDMF

Softek TDMF allows customers to move data between unlike storage vendors and switch over to new storage with no interruption to business-generating applications. Softek TDMF is host-based software that moves data at the block level between logical volumes without interrupting reads and writes to those volumes. Figure 14 shows the Softek TDMF architecture, which consists of a command line interface (CLI), a filter driver, and configuration files.
Figure 14: Softek TDMF Migration Tool Architecture (block-level copy from an HDD source volume to an SSD target volume)

There are two ways to migrate data using Softek TDMF: dynamic activation and auto switchover. Both methods support migration with no disruption to the application. Below is an example of how to migrate data using auto switchover mode, which can be used to migrate data at any time with minimal performance impact.

• Step 1: Create a migration volume and associate it with a valid source volume:

    # tdmf create tR01V3S /dev/hddR01V3S

• Step 2: Add a target volume to the migration volume:

    # tdmf add tR01V3S /dev/ssdR01V3S

• Step 3: Start the migration, with auto switchover once the copy is done:

    # tdmf copy -x tR01V3S

• Step 4: Remove the old source volume; the new volume takes over:

    # tdmf remove tR01V3S
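When many volumes must move, the four steps can be scripted. A sketch, assuming the hdd*/ssd* device-naming convention from the example above and that tdmf copy -x returns only after the copy and switchover complete (if it returns immediately in your environment, defer the remove step until switchover has happened):

    #!/usr/bin/ksh
    # Migrate a list of hot LVs from HDD to SSD with auto switchover.
    for vol in R01V1S R01V2S R01V3S R01V4S; do
        tdmf create "t${vol}" "/dev/hdd${vol}"    # migration volume over the source
        tdmf add    "t${vol}" "/dev/ssd${vol}"    # attach the SSD target
        tdmf copy -x "t${vol}"                    # copy, then auto switchover
        tdmf remove "t${vol}"                     # detach the old HDD source
    done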
With auto switchover mode, the source volume (the old volume on the HDD) is removed after the data is migrated to the target volume (the new volume on the SSD); mirrored writes to both the old and new volumes are then no longer required.

Figure 15 shows the progress of a migration of hot tablespaces from HDDs to SSDs while an OLTP application is running. Before the migration started, the database transaction response time was about 0.04 seconds. During the migration, the application ran uninterrupted, with the response time increasing to 0.07 seconds. After the migration completed, the database transaction response time dropped to 0.02 seconds and system throughput almost doubled.

Figure 15: Performance impact of migrating data from HDD to SSD (throughput in tps over 10-second intervals: hot data on HDD, then data migration, then hot data on SSD)

migratepv

Another tool that can be used for migrating data is the AIX command migratepv, which moves physical partitions from one AIX hdisk to one or more other hdisks. The following example moves the partitions from hdisk20 to hdisk30:

    migratepv hdisk20 hdisk30

The migrate function works by creating a mirror of the logical volumes involved and then resynchronizing them. While a migration is in progress the volume group is locked, so only one hdisk can be migrated at a time.
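When only specific logical volumes need to move, rather than every partition on the source disk, migratepv also accepts an LV name via its -l flag. A brief sketch; the LV and hdisk names are illustrative:

    # Move only the hot LV's partitions to the SSD-backed hdisk;
    # any other LVs on hdisk20 stay where they are.
    migratepv -l hddR01V3S hdisk20 hdisk30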
Conclusion

Customer I/O demands have outpaced the performance capabilities of traditional hard disk drives. Latencies associated with spinning platters and moving arms limit the speed of HDD data access. SSDs' near-instantaneous data access removes this I/O bottleneck, creating a paradigm shift in I/O performance.

Applications throttled by poor I/O performance can benefit greatly from SSDs. For these I/O-intensive applications, it is no longer necessary to "short stroke" drives in order to achieve good I/O performance. As demonstrated in our experiments, SSDs deliver a substantial improvement in I/O performance, which translates into increased business output, reduced energy consumption, reduced floor space requirements, and cost savings. Our experiments showed the following SSD benefits when comparing 800 HDDs with a mix of 36 SSDs and 80 HDDs:

• 1.65X system throughput improvement
• 1.72X application response time improvement
• 3X improvement in drive read response time
• 33X improvement in throughput per drive
• 90% reduction in energy consumption
• 84% reduction in floor space requirements

Additional experiments with an equal number of HDDs and SSDs showed 4X better drive read response times and 42X better throughput per drive.

SSDs can deliver significant business value to a broad spectrum of industries facing varied business challenges. We illustrated two workloads representing real-world customer applications in the financial sector and the consumer products industry. Both scenarios showed advantages in migrating to SSDs, including improved transaction response times, increased numbers of supported users, and storage cost savings.

The superior performance of SSDs must be balanced against their cost, and multi-tiered storage solutions can provide that balance: an application's "hot" data can be moved to SSDs, while less active data can remain on lower-cost HDDs. IBM provides tooling and services to assist customers in characterizing their data access patterns and making smart data placement choices for the highest-performing, lowest-cost storage solution.
For More Information

IBM Power Servers    ibm.com/systems/p
IBM Storage          ibm.com/services/storage

Notices and Disclaimer

Copyright © 2009 by International Business Machines Corporation. No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation.

Product data has been reviewed for accuracy as of the date of initial publication. Product data is subject to change without notice. This information may include technical inaccuracies or typographical errors. IBM may make improvements and/or changes in the product(s) and/or program(s) at any time without notice. References in this document to IBM products, programs, or services do not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business.

THE INFORMATION PROVIDED IN THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IBM EXPRESSLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT. IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and conditions of the agreements (e.g., IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement, etc.) under which they are provided. IBM is not responsible for the performance or interoperability of any non-IBM products discussed herein.

The performance data contained herein was obtained in a controlled, isolated environment. Actual results that may be obtained in other operating environments may vary significantly. While IBM has reviewed each item for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere.

Statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

IBM, IBM Power, AIX, DB2, DB2 9.5, Enterprise Storage Server, FlashCopy, and TotalStorage are trademarks of International Business Machines Corporation in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.

POW03025USEN-00