Virtualizing Oracle Databases with VMware

 Richard McDougall
  Chief Performance Architect




                                    © 2009 VMware Inc. All rights reserved
Agenda

 VMware Platform Introduction
 Why Virtualize Databases?
 Virtualization Technical Primer
 Performance Studies and Proof Points
 Deploying Databases in Virtual Environments
 •  Consolidation and Sizing
 •  Picking a Hardware Platform
 •  Configuring Storage
 •  Configuring the Virtual Machine
 •  OS Choices and Tuning
 •  Database Configuration
 •  Performance Monitoring
VMware Virtualization Basics
VMotion Technology

 VMotion Technology moves running virtual machines from one host to another while maintaining continuous service availability
 •  Enables Resource Pools
 •  Enables High Availability
Resource Controls

 Reservation
 •  Minimum service level guarantee (in MHz)
 •  Even when the system is overcommitted
 •  Needs to pass admission control

 Shares
 •  CPU entitlement is directly proportional to the VM's shares and depends on the total number of shares issued
 •  Abstract number, only the ratio matters

 Limit
 •  Absolute upper bound on CPU entitlement (in MHz)
 •  Even when the system is not overcommitted

 [Figure: a VM's CPU allocation ranges from 0 MHz to the host's total MHz; the reservation sets the floor, the limit sets the ceiling, and shares apply in between]
Resource Control Example

 •  A single VM with a given number of shares receives 100% of capacity
 •  Add a 2nd VM with the same number of shares: each VM receives 50%
 •  Add a 3rd VM with the same number of shares: each VM receives 33.3%
 •  Set the 3rd VM's limit to 25% of total capacity: the other two VMs receive 37.5% each
 •  Set the 1st VM's reservation to 50% of total capacity
 •  Add a 4th VM with a reservation of 75% of total capacity: FAILED ADMISSION CONTROL
Resource Pools

 Motivation
 •  Allocate aggregate resources for sets of VMs
 •  Isolation between pools, sharing within pools
 •  Flexible hierarchical organization
 •  Access control and delegation

 What is a resource pool?
 •  Abstract object with permissions
 •  Reservation, limit, and shares
 •  Parent pool, child pools and VMs
 •  Can be used on a stand-alone host or in a cluster (group of hosts)

 [Figure: an Admin-level pool split into Pool A (R: 600 MHz, L: not set, S: 60 shares) containing VM1 and VM2, and Pool B (R: not set, L: 2000 MHz, S: 40 shares) containing VM3 and VM4, giving a 60% / 40% split]
Example migration scenario 4_4_0_0 with DRS

 [Figure: a two-host cluster of HP ProLiant DL380G6 servers managed by vCenter; DRS migrates virtual machines from the heavily loaded host to the lightly loaded host, turning an imbalanced cluster into a balanced cluster]
DRS Scalability – Transactions per minute (Higher the better)

 [Chart: aggregate transactions per minute, DRS vs. no DRS, across run scenarios 2_2_2_2 through 5_3_0_0; scenarios that start already balanced show fewer gains, while more imbalanced scenarios show higher gains (> 40%) with DRS]
DRS Scalability – Application Response Time (Lower the better)

 [Chart: transaction response time in milliseconds, DRS vs. no DRS, across run scenarios 2_2_2_2 through 5_3_0_0]
VMware HA

 [Figure: two ESX hosts protected by HA; when one host fails, its VMs are rebooted on the surviving host]
VMware Fault Tolerance

 [Figure: a VM protected by FT across two ESX hosts; on host failure there is no reboot, just a seamless cutover]
vApp: The application of the cloud

 An uplifting of a virtualized workload
 •  VM = Virtualized Hardware Box
 •  App = Virtualized Software Solution
 •  Takes the benefits of virtualization (encapsulation, isolation and mobility) higher up the stack

 Properties:
 •  Comprised of one or more VMs (may be multi-tier applications)
 •  Encapsulates requirements on the deployment environment
 •  Distributed as an OVF package

 Built by:
 •  ISVs / Virtual Appliance Vendors
 •  IT administrators
 •  SI/VARs

 [Figure: an example eCommerce vApp (Tomcat, WebSphere, Exchange and SAP tiers) with attached policies: product, topology, resource requirements (CPU, memory, disk, bandwidth), only port 80 used, DR RPO of 1 hour, VRM encryption with SHA-1, decommission in 2 months]
The Progression of Virtualization to Cloud

 •  1998: VMware Workstation – workstation virtualization
 •  2001: VMware ESX – server virtualization
 •  2003: VMware Infrastructure – virtual resource pools
 •  2009: VMware vSphere – a complete virtualization platform, from the desktop through the datacenter to the cloud
Datacenter of the Future – private cloud

 •  On-demand capacity
 •  Pooling, load balancing of server, storage, network
 •  Built-in availability, security and scalability

 [Figure: multiple vSphere hosts aggregated into resource pools and exposed through an API as a compute factory]
vSphere 4.0 – The Most Complete Virtualization Platform

 Application Services
 •  Availability: clustering, data protection, fault tolerance
 •  Security: firewall, anti-virus, intrusion prevention, intrusion detection
 •  Scalability: dynamic resource sizing

 Infrastructure Services
 •  vCompute: hardware assist, enhanced live migration compatibility
 •  vStorage: storage management & replication, storage virtual appliances
 •  vNetwork: network management
Business-Critical Application Momentum

 % of customers running apps in production on VMware

 [Chart: within the subset of VMware customers running a specific app, the share with at least one instance of that app in production in a VM ranges from 24% to 56% across MS Exchange, MS SQL, MS SharePoint, Oracle Middleware, Oracle DB, IBM WebSphere, IBM DB2 and SAP]
 Source: VMware customer survey, September 2008, sample size 1,038

 In a recent Gartner poll, 73% of customers claimed to use x86 virtualization for mission-critical applications in production
 Source: Gartner IOM Conference (June 2008), "Linux and Windows Server Virtualization Is Picking Up Steam" (ID Number: G00161702)
Agenda

 VMware Platform Introduction
 Why Virtualize Databases?
 Virtualization Technical Primer
 Performance Studies and Proof Points
 Deploying Databases in Virtual Environments
 •  Picking a Hardware Platform
 •  Configuring Storage
 •  Configuring the Virtual Machine
 •  OS Choices and Tuning
 •  Database Configuration
 •  Performance Monitoring
Provision DB On-Demand

 [Figure: pre-configured database vApps (e.g., SQL on a 4 vCPU / 4 GB VM, Database Enterprise Edition on a 4 vCPU / 4 GB VM) deployed to lab and production environments, accelerating dev & test and speeding service availability]

 Pre-Configured vApps
 •  Standardize on optimal app & OS configurations
 •  Minimize configuration drift and errors
 •  Support multi-tier apps

 Provision On Demand
 •  Accelerate app development
 •  Faster service availability
Databases: Why Use VMs Rather than DB Virtualization?

 Virtualization at hypervisor level provides the best abstraction
 •  Each DBA has their own hardened, isolated, managed sandbox
 Strong Isolation
 •  Security
 •  Performance/Resources
 •  Configuration
 •  Fault Isolation
 Scalable Performance
 •  Low-overhead virtualized database performance
 •  Efficiently stack databases per host
Agenda

 VMware Platform Introduction
 Why Virtualize Databases?
 Virtualization Technical Primer
 Performance Studies and Proof Points
 Deploying Databases in Virtual Environments
 •  Picking a Hardware Platform
 •  Configuring Storage
 •  Configuring the Virtual Machine
 •  OS Choices and Tuning
 •  Database Configuration
 •  Performance Monitoring
VMware ESX Architecture

 [Figure: guest VMs run on a per-VM monitor; the VMkernel provides the CPU scheduler, memory allocator, virtual switch, file system, and NIC/I/O drivers on top of the physical hardware]

 •  CPU is controlled by the scheduler and virtualized by the monitor
 •  The monitor supports BT (Binary Translation), HW (Hardware assist) and PV (Paravirtualization)
 •  Memory is allocated by the VMkernel and virtualized by the monitor
 •  Network and I/O devices are emulated and proxied through native device drivers
Agenda

 VMware Platform Introduction
 Why Virtualize Databases?
 Virtualization Technical Primer
 Performance Studies and Proof Points
 Deploying Databases in Virtual Environments
 •  Picking a Hardware Platform
 •  Configuring Storage
 •  Configuring the Virtual Machine
 •  OS Choices and Tuning
 •  Database Configuration
 •  Performance Monitoring
Evolution of Performance for Large Apps on ESX

 ESX 2.x: overhead 30-60%; vCPUs: 2; VM RAM: 3.6 GB; phys RAM: 64 GB; PCPUs: 16 cores; IOPS: <10,000; N/W: 380 Mb/s; monitor type: binary translation
 VI 3.0: overhead 20-40%; vCPUs: 2; VM RAM: 16 GB; phys RAM: 64 GB; PCPUs: 16 cores; IOPS: 10,000; N/W: 800 Mb/s; monitor type: Gen-1 HW virtualization (VT/SVM)
 VI 3.5: overhead 10-30%; vCPUs: 4; VM RAM: 64 GB; phys RAM: 256 GB; PCPUs: 64 cores; IOPS: 100,000; N/W: 9 Gb/s; 64-bit OS support; monitor type: Gen-2 HW virtualization (NPT)
 vSphere 4.0: overhead 2-15%; vCPUs: 8; VM RAM: 255 GB; phys RAM: 1 TB; PCPUs: 64 cores; IOPS: 350,000; N/W: 28 Gb/s; 64-bit OS support; 320 VMs per host; 512 vCPUs per host; monitor type: EPT

 Each generation improves the ability to satisfy the performance demands of mission-critical apps, not just the general population of apps.
Can I Virtualize CPU-Intensive Applications?

 VMware ESX 3.x compared to native:
 •  SPECcpu results are covered in the O. Agesen and K. Adams paper
 •  WebSphere results were published jointly by IBM and VMware
 •  SPECjbb results are from recent internal measurements

 Most CPU-intensive applications have very low overhead
Debunking the myth: High Throughput, Low Overhead I/O

 Maximum reported storage throughput: 365K IOPS
 •  100K IOPS on VI3
 Maximum reported network throughput: 16 Gb/s
 •  Measured on VI3
Can I Virtualize High Networking I/O Applications?




    Overall response time is lower when CPU utilization is less than 100% due to multi-core offload
Enterprise Workload Demands vs. Capabilities

                   Workload Requires                vSphere 4 Provides
Oracle 11g         8vcpus for 95% of DBs            8vcpus per VM
                   64GB for 95% of DBs              256GB per VM
                   60k IOPS max for OLTP @ 8vcpus   120k IOPS per VM
                   77Mbits/sec for OLTP @ 8vcpus    9900Mbits/sec per VM

SQLserver          8vcpus for 95% of DBs            8vcpus per VM
                   64GB @ 8vcpus                    256GB per VM
                   25kIOPS max for OLTP @ 8vcpus    120k IOPS per VM
                   115Mbits/sec for OLTP @ 8vcpus   9900Mbits/sec per VM

SAP SD             8vcpus for 90% of SAP Installs   8vcpus per VM
                   24GB @ 8vcpus                    256GB per VM
                   1k IOPS @ 8vcpus                 120k IOPS per VM
                   115Mbits/sec for OLTP @ 8vcpus   9900Mbits/sec per VM

Exchange           4cpus per VM, Multiple VMs       8vcpus per VM
                   16GB @ 4vcpus                    256GB per VM
                   1000 IOPS for 2000 users         120k IOPS per VM
                   8Mbits/sec for 2000 users        9900Mbits/sec per VM

Apache SPECweb     2-4cpus per VM, Multiple VMs     8vcpus per VM
                   8GB @ 4vcpus                     256GB per VM
                   100IOPS for 2000 users           120k IOPS per VM
                   3Gbits/sec for 2000 users        9900Mbits/sec per VM
Measuring the Performance of DB Virtualization

 [Figure: two measures of interest – throughput delivered and minimal overheads]
How large is your database instance? (one VM shown)
IO In Action: Oracle/TPC-C*

 "   ESX achieves 85% of native performance with an industry-standard OLTP workload on an 8-vCPU VM (about 58,000 IOPS)
 "   1.9x increase in throughput with each doubling of vCPUs

 [Chart: scaling ratio of native vs. VM at 1, 2, 4 and 8 virtual/physical CPUs]
Eight vCPU Oracle System Characteristics

 Metric                                  8 vCPU VM
 Business transactions per minute        250,000
 Disk IOPS                               60,000
 Disk bandwidth                          258 MB/s
 Network packets/sec                     27,000
 Network throughput                      77 Mb/s

 * Our benchmark was a fair-use implementation of the TPC-C business model; our results are not TPC-C compliant results, and not comparable to official TPC-C results
Oracle/TPC-C* Experimental Details

 Host: an 8-CPU system based on the Intel Xeon 5500 series
 OLTP benchmark: fair-use implementation of the TPC-C workload
 Software stack: RHEL 5.1, Oracle 11g R1, internal build of ESX (ESX 4.0 RC)
 Were there many tweaks needed to get this result? Not really…
 •  ESX development build with these features:
   !  Async I/O, pvscsi driver, virtual interrupt coalescing, topology-aware scheduling
   !  EPT: hardware MMU-enabled processor
 •  The only ESX "tunable" applied: static vmxnet TX coalescing
   !  3% improvement in performance
VMware vSphere enables you to use all those cores…

 VMware ESX scaling: keeping up with core counts
 •  Virtualization provides a means to exploit the hardware's increasing parallelism
 •  Most applications don't scale beyond 4/8-way
“Bonus” Memory During Consolidation: Sharing!

 Content-based
 •  A hint (hash of the page content) is generated for 4K pages
 •  The hint is used for a match
 •  If matched, perform a bit-by-bit comparison

 COW (Copy-on-Write)
 •  Shared pages are marked read-only
 •  A write to the page breaks sharing

 [Figure: the hypervisor maps identical pages from VM 1, VM 2 and VM 3 to a single shared physical page]
Multi-VM Performance: DVD-Rental Workload

 !  Simulate a large multi-tier application with an RDBMS
    •  Simulates DVD store transactions
    •  Java client tier
    •  Microsoft SQL Server and Oracle Database

 SQL Server host: Dell PE2950, 2 x quad-core Intel Xeon X5450, 32 GB RAM
 Oracle host: Sun x4600 M2 (16 cores), VMware ESX 3.5, Oracle 10g R2, RHEL4 Update 4 (64-bit)
 Storage: EMC CLARiiON CX3-40
Consolidating Multiple Oracle VMs

 [Chart: aggregate TPM and CPU utilization vs. number of VMs (1 to 7), scaling up to 16 cores and 256 GB of RAM]

 An average of 1 GB of memory was saved per instance from page sharing
Oracle Performance (Response time)

 [Chart: average response time and CPU utilization vs. number of VMs (1 to 7)]

 !  Oracle scales very well on ESX in consolidation scenarios
 !  Efficient, guaranteed resource allocation to individual virtual machines
Agenda

 VMware Platform Introduction
 Why Virtualize Databases?
 Virtualization Technical Primer
 Performance Studies and Proof Points
 Deploying Databases in Virtual Environments
 •  Consolidation and Sizing
 •  Picking a Hardware Platform
 •  Configuring Storage
 •  Configuring the Virtual Machine
 •  OS Choices and Tuning
 •  Database Configuration
 •  Performance Monitoring
General Best Practices for Virtualizing DBs

 Characterize DBs into three rough groups:
 •  Green DBs – typically 70%
    ! Ideal candidate for virtualization:
      -  Well tuned and modest CPU consumption
      -  Less than 1,000 IOPS, 4 cores
 •  Yellow DBs – typically 25%
    ! Likely candidate for virtualization
      -  May need some SQL tuning and monitoring to understand CPU and I/O requirements
      -  4-8 cores, >1,000 IOPS
      -  Storage I/O planning and configuration required
 •  Red DBs – typically 5%
    ! Unlikely candidates until larger VMs are available
    ! Consume more than 8 physical cores
    ! Not a lot of SQL tuning to be done
Consolidation and Sizing

 Consolidation targets are often <30% utilized
 "   Windows average utilization: 5-8%
 "   Linux/Unix average utilization: 10-35%

 [Chart: CPU utilization distribution – number of systems (log scale) vs. % CPU utilization, heavily skewed toward low utilization]
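 To establish a baseline before consolidating, it helps to measure the candidate's real utilization over a representative window; a minimal sketch using sar from the sysstat package (the interval and count are arbitrary examples):

 # sar -u 60 60
 (samples CPU utilization every 60 seconds for one hour on the physical candidate)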
Sizing and Requirements

 Virtual machine sizing is different from physical sizing
 •  Don't just take the number of CPUs in the physical system as the vCPU requirement
 •  Many physical systems are sized for peak utilization, with ample headroom for future growth
 •  As a result, utilization is often very low on physical systems
 •  With virtual machines, it's not necessary to build in that headroom
 •  For example, many databases running on 4-CPU systems can easily run in a 2-vCPU guest

 Moving older RISC/SPARC machines to virtual x86
 •  Even that large older-generation SPARC server may be a good candidate…
 •  48 x 1.2 GHz SPARC cores ≈ 1 x 8-core Nehalem VM
 •  Since most large SPARC machines are already consolidated, it's likely that your larger databases can run inside a VM
Picking Hardware: Recent Hardware has Lower Overhead

 [Chart: Intel architecture VMEXIT latencies (in cycles) falling steadily across the Prescott, Cedar Mill, Merom, Penryn and Nehalem generations]

 Hardware virtualization support keeps improving from CPU generation to generation
Use Intel Nehalem or AMD Barcelona, or later…

 Hardware memory management units (MMU) improve efficiency
 •  AMD RVI is currently available
 •  Dramatic gains can be seen
 But some workloads see little or no value
 •  And a small few actually slow down

 [Chart: AMD RVI speedup for SQL Server, Citrix XenApp and Apache compile workloads]
Databases: Top Ten Tuning Recommendations

 1.    Optimize Storage Layout, # of Disk Spindles
 2.    Use 64-bit Database
 3.    Add enough memory to cache DB, reduce I/O
 4.    Optimize Storage Layout, # of Disk Spindles
 5.    Use Direct-IO high performance un-cached path in the
       Guest Operating System
 6.  Use Asynchronous I/O to reduce system calls
 7.  Optimize Storage Layout, # of Disk Spindles
 8.  Use Large MMU Pages
 9.  Use the latest H/W – with AMD RVI or Intel EPT
 10. Optimize Storage Layout, # of Disk Spindles
Databases: Workload Considerations

 OLTP
 •  Short transactions
 •  Limited number of standardized queries
 •  Small amounts of data accessed
 •  Uses data from only one source
 •  I/O profile:
    -  Small synchronous reads/writes (2k->8k)
    -  Heavy latency-sensitive log I/O
 •  Memory and I/O intensive

 DSS
 •  Long transactions
 •  Complex queries
 •  Large amounts of data accessed
 •  Combines data from different sources
 •  I/O profile:
    -  Large, sequential I/Os (up to 1 MB)
    -  Extreme bandwidth required
    -  Heavy read traffic against data volumes
    -  Little log traffic
 •  CPU, memory and I/O intensive
 •  Indexing enables higher performance
Databases: Storage Configuration

 Storage considerations
 •  VMFS or RDM
 •  Fibre Channel, NFS or iSCSI
 •  Partition Alignment
 •  Multiple storage paths
 OS/App, Data, Transaction Log and TempDB on separate physical
 spindles
 RAID 10 or RAID5 for Data, RAID 1 for logs
 Queue depth and Controller Cache Settings
 TempDB optimization
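 A quick way to check partition alignment from inside a Linux guest is to list the partition table in sectors; a minimal sketch (the device name is a placeholder, and the 64 KB boundary assumes 512-byte sectors):

 # fdisk -lu /dev/sdb
 (an aligned data partition starts on a sector that is a multiple of 128, i.e. a 64 KB boundary;
  the old default start sector of 63 is misaligned)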
Disk Fundamentals

 Databases are mostly random I/O access patterns
 Accesses to disk are dominated by seek/rotate
 •  10k RPM Disks: 150 IOPS max, ~80 IOPS Nominal
 •  15k RPM Disks: 250 IOPS max, ~120 IOPS Nominal
 Database storage performance is controlled by two primary factors
 •  Size and configuration of the cache(s)
 •  Number of physical disks at the back end
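 As a rough sanity check (using the nominal per-disk figures above and ignoring cache for the moment), the 60,000-IOPS OLTP load described later in this presentation would need on the order of:

 60,000 IOPS / ~120 IOPS per 15k RPM disk  =  ~500 spindles
 60,000 IOPS / ~80 IOPS per 10k RPM disk   =  ~750 spindles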
Disk Performance

 Higher sequential performance (bandwidth) on the outer tracks
Databases: Storage Hierarchy

 [Figure: the cache hierarchy from the database cache, through the guest OS cache, down to the controller cache behind /dev/hda]

 "   In a recent study, we scaled up to 320,000 IOPS to an EMC array from a single ESX server
     "   8K read/write mix
 "   Cache as much as possible in the upper layers of the hierarchy
 "   Q: What's the impact on the number of disks if we improve the cache hit rate from 90% to 95%?
     "   10 misses in 100 => 5 misses in 100…
     "   The number of disks is reduced by 2x!
Storage – VMFS or RDM

 [Figure: with RDM, the guest's /dev/hda maps directly to an FC LUN; with VMFS, each guest's virtual disk is a .vmdk file (database1.vmdk, database2.vmdk) on a VMFS volume backed by an FC or iSCSI LUN]

 RAW (RDM)
 •  Provides direct access to a LUN from within the VM
 •  Allows portability between physical and virtual
 •  RAW means more LUNs
    -  More provisioning time
 •  Advanced features still work

 VMFS
 •  Leverage templates and quick provisioning
 •  Fewer LUNs means you don't have to watch Heap
 •  Scales better with Consolidated Backup
 •  Preferred method

Best Practices: VMFS or RDM – performance is similar
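 For reference, a hedged sketch of creating an RDM from the ESX console with vmkfstools; the device identifier and datastore paths are placeholders (-r creates a virtual-compatibility mapping, -z a physical-compatibility / pass-through mapping):

 # vmkfstools -r /vmfs/devices/disks/<naa.xxxxxxxx> /vmfs/volumes/datastore1/dbvm/oradata-rdm.vmdk
 # vmkfstools -z /vmfs/devices/disks/<naa.xxxxxxxx> /vmfs/volumes/datastore1/dbvm/oradata-rdmp.vmdk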
Databases: Typical I/O Architecture

 [Figure: the database cache issues three I/O streams through the file system and FS cache – log writes of 512 bytes to 1 MB, DB writes of 2k/8k/16k x n, and DB reads of 2k/8k/16k x n]
Know your I/O: Use a top-down latency analysis technique

 Measure latency at each layer of the stack, from the application down to the device:
 •  A = application latency (measured in the application itself)
 •  R = Perfmon Physical Disk "Disk secs/transfer" (Windows device queue)
 •  S = Windows physical disk service time
 •  G = guest latency, measured at the virtual SCSI layer
 •  K = ESX VMkernel (file system) latency
 •  D = device latency
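 On the ESX host itself, esxtop exposes the K, D and G components directly; a brief sketch (the counters below are standard esxtop disk statistics):

 # esxtop
 (press 'd' for the adapter view, 'u' for the device view, 'v' for the per-VM disk view)
 (DAVG/cmd ~ device latency D, KAVG/cmd ~ kernel latency K, GAVG/cmd = DAVG + KAVG ~ guest latency G)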
Checking for Disk Bottlenecks

 Disk latency issues are visible from Oracle stats
 •  Enable statspack
 •  Review top latency events




 Top 5 Timed Events
                                                          % Total

 Event                          Waits       Time (s)     Ela Time

 --------------------------- ------------ ----------- -----------
 db file sequential read       2,598        7,146           48.54
 db file scattered read       25,519        3,246           22.04
 library cache load lock         673        1,363            9.26
 CPU time                      2,154          934            7.83
 log file parallel write      19,157          837            5.68
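 If statspack is driven by hand, the snapshots and report come from SQL*Plus; a hedged sketch (the perfstat password is a placeholder, and spreport.sql prompts for the two snapshot IDs):

 # sqlplus perfstat/<password>
 SQL> execute statspack.snap;
 (run the workload for the measurement window, then take a second snapshot)
 SQL> execute statspack.snap;
 SQL> @?/rdbms/admin/spreport.sql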
Oracle File System Sync vs DIO
Oracle DIO vs. RAW
Direct I/O

 Guest-OS level option for bypassing the guest cache
 •  Uncached access avoids multiple copies of data in memory
 •  Avoids read/modify/write cycles modulo the file system block size
 •  Bypasses many file-system level locks

 Enabling Direct I/O for Oracle on Linux:
 # vi init.ora
 filesystemio_options="setall"

 Enabling Direct I/O for MySQL (InnoDB) on Linux:
 # vi my.cnf
 innodb_flush_method=O_DIRECT

 Check:
 # iostat 3
 (check that the I/O size matches the DB block size…)
Asynchronous I/O

 An API that lets a single-threaded process launch multiple outstanding I/Os
 •  Multi-threaded programs could instead just use multiple threads
 •  Oracle databases use this extensively
 •  See aio_read(), aio_write(), etc.

 Enabling AIO on Linux:

 # rpm -Uvh aio.rpm
 # vi init.ora
 filesystemio_options="setall"

 Check:

 # ps -aef | grep dbwr
 # strace -p <pid>
 io_submit()…                  <- check for io_submit() in the syscall trace
Picking the size of each VM

 vCPUs from one VM stay on one socket*
 With two quad-core sockets, there are only two possible placements for a 4-way VM
 1- and 2-way VMs can be arranged many more ways across the quad-core sockets
 Newer ESX schedulers handle the fewer placement options more efficiently
 •  Relaxed co-scheduling

 [Figure: placement options on a two-socket, quad-core host – 2 for a 4-way VM, 12 for a 2-way VM, 8 for a 1-way VM]
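 The option counts in the figure follow from simple combinatorics on a two-socket, quad-core host, assuming a VM's vCPUs must stay on one socket:

 4-way VM:  2 sockets x C(4,4)  =  2 x 1  =  2 placements
 2-way VM:  2 sockets x C(4,2)  =  2 x 6  =  12 placements
 1-way VM:  any of 8 cores              =  8 placements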
Use Large Pages

 Guest-OS level option to use large MMU pages
 •  Maps the large SGA region with fewer TLB entries
 •  Reduces MMU overheads
 •  ESX 3.5 uniquely supports large pages!

 Enabling large pages on Linux:

 # vi /etc/sysctl.conf
 (add the following lines:)

 vm/nr_hugepages=2048
 vm/hugetlb_shm_group=55

 # cat /proc/meminfo | grep Huge
 HugePages_Total: 1024
 HugePages_Free:    940
 Hugepagesize:     2048 kB
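 For Oracle to actually back the SGA with these huge pages, the oracle user typically also needs a memlock limit at least as large as the SGA and membership in the hugetlb_shm_group; a hedged sketch (the 4 GB memlock value, expressed in KB, is an assumed example):

 # vi /etc/security/limits.conf
 oracle    soft    memlock    4194304
 oracle    hard    memlock    4194304

 # id oracle
 (confirm one of the listed gids matches vm/hugetlb_shm_group, 55 in the example above)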
Large Pages

 Increase TLB memory coverage
 •  Removes TLB misses, improves efficiency
 Improve performance of applications that are sensitive to TLB miss costs
 Configure the OS and application to leverage large pages
 •  Large pages will not be enabled by default

 [Chart: performance gain (%) from enabling large pages]
Linux Versions

 Some older Linux versions use a 1 kHz timer tick to optimize desktop-style applications
 •  There is no reason to use such a high timer rate for server-class applications
 •  The timer interrupt rate on a 4-vCPU Linux guest is over 70,000 per second!
 Use RHEL 5.1 or later, or the latest tickless-timer kernels
 •  Install the 2.6.18-53.1.4 kernel or later
 •  Add divider=10 to the end of the kernel line in grub.conf and reboot, or rely on the default behavior of a tickless kernel (see the example below)
 •  All the RHEL clones (CentOS, Oracle EL, etc.) work the same way
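 A minimal sketch of the corresponding grub.conf kernel line, assuming the RHEL 5.1 kernel version named above (the root device and other arguments are placeholders from a typical install):

 # vi /boot/grub/grub.conf
 kernel /vmlinuz-2.6.18-53.1.4.el5 ro root=/dev/VolGroup00/LogVol00 divider=10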
Monitor and Control Service Levels with AppSpeed

 Policies (SLA): 99.9% uptime, 100 ms latency, .01% error rate

 [Figure: AppSpeed maps the end-user-facing web, app and DB tiers onto the underlying infrastructure]

 •  Automatically map services to infrastructure
 •  Monitor service levels and identify bottlenecks
 •  Size infrastructure dynamically to meet the SLA cost-effectively
Performance Whitepapers
•  VMware vCenter Update Manager Performance and Best Practices
•  Microsoft Exchange Server 2007 Performance on VMware vSphere
•  Virtualizing Performance-Critical Database Applications in VMware vSphere
•  Performance Evaluation of Intel EPT Hardware Assist
•  SAP Performance on VMware vSphere
•  A Comparison of Storage Protocol Performance
•  Microsoft SQLServer Performance
•  Fault-Tolerance Performance
•  Overview of Memory Management in VMware vSphere
•  Scheduler Improvements in VMware vSphere
•  Comparison of Storage Protocols with Microsoft Exchange 2007
•  Networking Performance and Scalability in VMware vSphere
•  Performance Analysis of VMware VMFS Filesystem
•  Performance Impact of PVSCSI
•  vSphere Performance Best Practices
For more info:                  www.vmware.com/oracle

 Richard McDougall
  Chief Performance Architect




                                               © 2009 VMware Inc. All rights reserved
