Riverbed Certified Solutions Professional (RCSP)
     Study Guide
     Exam 199-01 for RiOS v5.0




June, 2009
Version 2.0




     COPYRIGHT © 2007-2009 Riverbed Technology, Inc.
     ALL RIGHTS RESERVED
     All content in this manual, including text, graphics, logos, icons, and images, is the exclusive property of Riverbed
     Technology, Inc. (“Riverbed”) and is protected by U.S. and international copyright laws. The compilation (meaning
     the collection, arrangement, and assembly) of all content in this manual is the exclusive property of Riverbed and is
     also protected by U.S. and international copyright laws. The content in this manual may be used as a resource. Any
     other use, including the reproduction, modification, distribution, transmission, republication, display, or
     performance, of the content in this manual is strictly prohibited.
     TRADEMARKS
     RIVERBED TECHNOLOGY, RIVERBED, STEELHEAD, RiOS, INTERCEPTOR, and the Riverbed logo are
     trademarks or registered trademarks of Riverbed. All other trademarks mentioned in this manual are the property of
     their respective owners. The trademarks and logos displayed in this manual may not be used without the prior
     written consent of Riverbed or their respective owners.
     PATENTS
     Portions, features and/or functionality of Riverbed's products are protected under Riverbed patents, as well as
     patents pending.
     DISCLAIMER
     THIS MANUAL IS PROVIDED BY RIVERBED ON AN "AS IS" BASIS. RIVERBED MAKES NO
     REPRESENTATIONS OR WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, AS TO THE
     INFORMATION, CONTENT, MATERIALS, OR PRODUCTS INCLUDED OR REFERENCED IN THE
     MANUAL. TO THE FULL EXTENT PERMISSIBLE BY APPLICABLE LAW, RIVERBED DISCLAIMS ALL
     WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES
     OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
     Although Riverbed has attempted to provide accurate information in this manual, Riverbed assumes no
     responsibility for the accuracy or completeness of the information. Riverbed may change the programs or products
     mentioned in this manual at any time without notice, but Riverbed makes no commitment to update the programs or
     products mentioned in this manual in any respect. Mention of non-Riverbed products or services is for information
     purposes only and constitutes neither an endorsement nor a recommendation.
     RIVERBED WILL NOT BE LIABLE UNDER ANY THEORY OF LAW, FOR ANY INDIRECT, INCIDENTAL,
     PUNITIVE OR CONSEQUENTIAL DAMAGES, INCLUDING, BUT NOT LIMITED TO, LOSS OF PROFITS,
     BUSINESS INTERRUPTION, LOSS OF INFORMATION OR DATA OR COSTS OF REPLACEMENT GOODS,
     ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL OR ANY RIVERBED PRODUCT OR
     RESULTING FROM USE OF OR RELIANCE ON THE INFORMATION PRESENT, EVEN IF RIVERBED
     MAY HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
     CONFIDENTIAL INFORMATION
     The information in this manual is considered Confidential Information (as defined in the Reseller Agreement entered
     with Riverbed or in the Riverbed License Agreement currently available at www.riverbed.com/license, as
     applicable).





    Table of Contents
     Preface
     Certification Overview
       Benefits of Certification
       Exam Information
       Certification Checklist
       Recommended Resources for Study
     RIVERBED CERTIFIED SOLUTIONS PROFESSIONAL STUDY GUIDE
     I. General Knowledge
        Optimizations Performed by RiOS
        TCP/IP
        Common Ports
        RiOS Auto-discovery Process
        Enhanced Auto-Discovery Process
        Connection Pooling
        In-path Rules
        Peering Rules
        Steelhead Appliance Models and Capabilities
     II. Deployment
        In-path
        Out-of-Band (OOB) Splice
        Virtual In-path
        Policy-Based Routing (PBR)
        WCCP Deployments
        Advanced WCCP Configuration
        Server-Side Out-of-Path Deployments
        Asymmetric Route Detection
        Connection Forwarding
        Simplified Routing (SR)
        Data Store Synchronization
        CIFS Prepopulation
        Authentication and Authorization
        SSL
        Central Management Console (CMC)
        Steelhead Mobile Solution (Steelhead Mobile Controller & Steelhead Mobile Client)
        Interceptor Appliance
     III. Features
        Feature Licensing
        HighSpeed TCP (HSTCP)
        MX-TCP
        Quality of Service
        PFS (Proxy File Service) Deployments
        NetFlow
        IPSec
        Operation on VLAN Tagged Links
     IV. Troubleshooting
        Common Deployment Issues
        Reporting and Monitoring
        Troubleshooting Best Practices
     V. Exam Questions
        Types of Questions
        Sample Questions
     VI. Appendix
        Acronyms and Abbreviations






    Preface
    This Riverbed Certification Study Guide is intended for anyone who wants to become certified in
    the Riverbed Steelhead products and Riverbed Optimization System (RiOS). The Riverbed
    Certified Solutions Professional (RCSP) program is designed to validate the skills required of
    technical professionals who work in the implementation of Riverbed products.
    This study guide provides a combination of theory and practical experience needed for a general
     understanding of the subject matter. It also provides sample questions to help you evaluate your
     progress and become familiar with the types of questions you will encounter in the exam.
    This publication does not replace practical experience, nor is it designed to be a stand-alone
    guide for any subject. Instead, it is an effective tool that, when combined with education
    activities and experience, can be a very useful preparation guide for the exam.
    Certification Overview
    The Riverbed Certified Solutions Professional certificate is granted to individuals who
    demonstrate advanced knowledge and experience with the RiOS product suite. The typical RCSP
    will have taken a Riverbed approved training class such as the Steelhead Appliance Deployment
    & Management course in addition to having hands-on experience in performing deployment,
    troubleshooting, and maintenance of RiOS products in small, medium, and large organizations.
    While there are no set requirements prior to taking the exam, candidates who have taken a
    Riverbed authorized training class and have at least six months of hands-on experience with
    RiOS products have a significantly higher chance of receiving the certification. We would like to
    emphasize that solely taking the class will not adequately prepare you for the exam.
    To obtain the RCSP certification, you are required to pass a computerized exam available at any
    Pearson VUE testing center worldwide.
    Benefits of Certification
    1. Establishes your credibility as a knowledgeable and capable individual in regard to
       Riverbed's products and services.
    2. Helps improve your career advancement potential.
    3. Qualifies you for discounts and/or benefits for Riverbed sponsored events and training.
    4. Entitles you to use the RCSP certification logo on your business card.
    Exam Information
    Exam Specifications
    • Exam Number: 199-01
    • Exam Name: Riverbed Certified Solutions Professional
    • Version of RiOS: Up to RiOS version 5.0 for the Steelhead appliances and the Central
       Management Console, and Interceptor 2.0 and Steelhead Mobile 2.0
    • Number of Questions: 65
    • Total Time: 75 minutes for exam, 15 minutes for Survey and Tutorial (90 minutes total)
    • Exam Provider: Pearson VUE
     •    Exam Language: English only. Riverbed allows a 30-minute time extension for English
          exams taken in non-English-speaking countries for candidates who request it. English-speaking
          countries are Australia, Bermuda, Canada, Great Britain, Ireland, New Zealand, Scotland,
          South Africa, and the United States. The candidate must complete a form and submit it to
          Pearson VUE.
     •    Special Accommodations: Yes (must submit written request to Pearson VUE for ESL or
          ADA accommodations; includes time extensions and/or a reader)
     •    Offered Locations: Worldwide (over 5000 test centers in 165 countries)
     •    Pre-requisites: None (although taking a Riverbed training class is highly recommended)
     •    Available to: Everyone (partners, customers, employees, etc)
     •    Passing Score: 700 out of 1000 (70%)
     •    Certification Expires: Every 2 years (must recertify every 2 years, no grace period)
     •    Wait Between Failed Attempts: 72 hours. No retakes allowed on passed exams.
     •    Cost: $150.00 (USD)
     •    Number of Attempts Allowed: Unlimited (though statistics are kept)
     Certification Checklist
     As the RCSP exam is geared towards individuals who have both theoretical knowledge of and
     hands-on experience with the RiOS product suite, proficiency in both areas is crucial to
     passing the exam. For individuals starting out with the process, we recommend the
     following steps to guide you along the way:
     1. Building Theoretical Knowledge
        The easiest way to become knowledgeable in deploying, maintaining, and troubleshooting
        the RiOS product suite is to take a Riverbed authorized training class. To ensure the greatest
        possibility of passing the exam, it is recommended that you review the RCSP Study Guide
        and ensure your familiarity with all topics listed, prior to any examination attempts.
     2. Gaining Hands-on Experience
        While the theoretical knowledge will get you partway there, it is the hands-on knowledge
        that can get you over the top and enable you to pass the exam. Since all deployments are
        different, providing an exact amount of experience required is difficult. Generally, we
        recommend that resellers and partners perform at least five deployments in a variety of
        technologies prior to attempting the exam. For customers (and, alternatively, for resellers and
        partners), being involved from the design and deployment phase onward and having at least six
        months of experience in a production environment is beneficial.
     3. Taking the Exam
        The final step in becoming an RCSP is to take the exam at a Pearson VUE authorized testing
        center. To register for any Riverbed Certification exam, please visit
        http://www.pearsonvue.com/riverbed.
     Recommended Resources for Study
     Riverbed Training Courses
     Information on Riverbed Training can be found at: http://www.riverbed.com/services/training/.
     •    Steelhead Appliance Deployment & Management
     •    Steelhead Appliance Operations & L1/L2 Troubleshooting
     •    Steelhead Mobile Installation & Configuration
     •    Central Management Console Configuration & Operations
     •    Interceptor Appliance Installation & Configuration




    •   Steelhead Appliance Advanced Deployment & Troubleshooting
    Publications
    Recommended Reading (In No Particular Order)
    • This study guide
    •   Riverbed documentation
           o Steelhead Management Console User's Guide
           o Steelhead Command-Line Interface Reference Guide
           o Steelhead Appliance Deployment Guide
           o Steelhead Appliance Installation Guide
           o Bypass Card Installation Guide
           o Steelhead Mobile Controller User’s Guide
           o Steelhead Mobile Controller Installation Guide
           o Central Management Console User's Guide
           o Central Management Console Installation Guide
           o Interceptor Appliance User's Guide
           o Interceptor Appliance Installation Guide
    Other Reading (URLs Subject to Change)
    • http://www.ietf.org/rfc.html
           o RFC 793 (Original TCP RFC)
           o RFC 1323 TCP extensions for high performance
           o RFC 3649 (HighSpeed TCP for Large Congestion Windows)
           o RFC 3742 (Limited Slow-Start for TCP with Large Congestion Windows)
           o RFC 2474 (Differentiated Services Code Point)
    •   http://www.caida.org/tools/utilities/flowscan/arch.xml (NetFlow Protocol and Record
        Headers)
    •   http://ubiqx.org/cifs/Intro.html (CIFS)
    •   Microsoft Windows 2000 Server Administrator’s Companion by Charlie Russell and Sharon
        Crawford (Microsoft Press, 2000)
    •   Common Internet File System (CIFS) Technical Reference by the Storage Networking
        Industry Association (Storage Networking Industry Association, 2002)
    •   TCP/IP Illustrated, Volume I, The Protocols by W. R. Stevens (Addison-Wesley, 1994)
    •   Internet Routing Architectures (2nd Edition) by Bassam Halabi (Cisco Press, 2000)






     RIVERBED CERTIFIED SOLUTIONS PROFESSIONAL STUDY GUIDE
     The Riverbed Certified Solutions Professional exam, and therefore this study guide, covers the
     Riverbed products and technologies through RiOS version 5.0 only (as well as Interceptor 2.0
     and Steelhead Mobile 2.0).
     I. General Knowledge
     Optimizations Performed by RiOS
     Optimization is the process of increasing data throughput and network performance over the
     WAN using Steelhead appliances. An optimized connection exhibits bandwidth reduction as it
     traverses the WAN. The optimization techniques RiOS utilizes are:
     •    Data Streamlining
     •    Transport Streamlining
     •    Application Streamlining
     •    Management Streamlining
     You should be familiar with the differences in these streamlining techniques for the RCSP test.
     This information can be found in the Steelhead Appliance Deployment Guide.
     Transaction Acceleration (TA)
     TA is composed of the following optimization mechanisms:
     • A connection bandwidth-reducing mechanism called Scalable Data Referencing (SDR)
     •    A Virtual TCP Window Expansion (VWE) mechanism that repacks TCP payloads with
          references that represent arbitrary amounts of data, thus increasing the client-data per WAN
          TCP window
     •    A latency reduction and avoidance mechanism called Transaction Prediction (TP)
     SDR and TP can work independently or in conjunction with one another depending on the
     characteristics and workload of the data sent across the network. The results of the optimization
     vary, but often result in throughput improvements in the range of 10 to 100 times over
     unaccelerated links.
     Scalable Data Referencing (SDR)
     Bandwidth optimization is delivered through SDR. SDR uses a proprietary algorithm to break up
     TCP data streams into data chunks that are stored in the hard disk (data store) of the Steelhead
     appliances. Each data chunk is assigned a unique integer label (reference) before it is sent to the
     peer Steelhead appliance across the WAN. If the same byte sequence is seen again in the TCP
     data stream, then the reference is sent across the WAN instead of the raw data chunk. The peer
     Steelhead appliance uses this reference to reconstruct the original data in the TCP data stream.
     Data and references are maintained in persistent storage in the data store within each Steelhead
     appliance. Because SDR checks data chunks byte-by-byte there are no consistency issues even in
     the presence of replicated data.
     How Does SDR Work?
     When data is sent for the first time across a network (no commonality with any file ever sent
     before), all data and references are new and are sent to the Steelhead appliance on the other side
     of the network. This new data and the accompanying references are compressed using
     conventional algorithms so as to improve performance, even on the first transfer.



     Over time, more data crosses the network (for example, revisions of a document). Thereafter,
     when new requests are sent across the network, the data is compared with references that
     already exist in the local data store. Any data that the Steelhead appliance determines already
     exists on the far side of the network is not sent; only the references are sent across the
     network.
    As files are copied, edited, renamed, and otherwise changed or moved (as well as web pages
    being viewed or email sent), the Steelhead appliance continually builds the data store to include
    more and more data and references. References can be shared by different files and by files in
    different applications if the underlying bits are common to both. Since SDR can operate on all
    TCP-based protocols, data commonality across protocols can be leveraged so long as the binary
    representation of that data does not change between the protocols. For example, when a file
    transferred via FTP is then transferred using WFS (Windows File System), the binary
    representation of the file is basically the same and thus references can be sent for that file.
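
     The actual SDR segmentation and reference scheme is proprietary, but the general idea of sending
     short references in place of byte sequences the peer already holds can be illustrated with a
     minimal sketch. The fixed-size chunks, SHA-1 labels, and in-memory dictionaries below are
     illustrative stand-ins, not the RiOS implementation:

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative only; RiOS chunking is proprietary and not fixed-size

def sdr_encode(stream: bytes, data_store: dict) -> list:
    """Replace chunks already known to both sides with short references."""
    tokens = []
    for i in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[i:i + CHUNK_SIZE]
        ref = hashlib.sha1(chunk).hexdigest()   # stand-in for an SDR reference label
        if ref in data_store:
            tokens.append(("ref", ref))          # send only the reference across the WAN
        else:
            data_store[ref] = chunk              # learn the chunk; send data plus reference
            tokens.append(("data", ref, chunk))
    return tokens

def sdr_decode(tokens: list, data_store: dict) -> bytes:
    """Peer side: rebuild the original stream from references and newly learned chunks."""
    parts = []
    for token in tokens:
        if token[0] == "ref":
            parts.append(data_store[token[1]])
        else:
            _, ref, chunk = token
            data_store[ref] = chunk
            parts.append(chunk)
    return b"".join(parts)

# A "cold" transfer sends data; a second transfer of the same bytes sends only references.
sender_store, receiver_store = {}, {}
payload = b"quarterly report, revision 1 " * 2000
cold = sdr_encode(payload, sender_store)
warm = sdr_encode(payload, sender_store)
assert sdr_decode(cold, receiver_store) == payload
assert all(t[0] == "ref" for t in warm)
```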
    Lempel-Ziv (LZ) Compression
    SDR and compression are two different features and can be controlled separately. However, LZ
    compression is the primary form of data reduction for cold transfers.
     The Lempel-Ziv (LZ) compression methods are among the most popular algorithms for lossless
     data compression. Compression is turned on by default. In-path rules can be used to define which
    optimization features will be used for which set of traffic flowing through the Steelhead
    appliance.
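
     RiOS's LZ implementation is not publicly documented, but DEFLATE (exposed by Python's standard
     zlib module) belongs to the same Lempel-Ziv family and shows the kind of first-pass reduction LZ
     provides on a cold transfer, before any SDR history exists:

```python
import zlib

# A cold transfer: no SDR references yet, but LZ still exploits repetition within the stream.
payload = b"GET /reports/q2.csv HTTP/1.1\r\nHost: fileserver.example\r\n\r\n" * 200
compressed = zlib.compress(payload, level=6)

print(f"{len(payload)} bytes before, {len(compressed)} bytes after LZ-family compression")
```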
    TCP Optimizations & Virtual Window Expansion (VWE)
    As Steelhead appliances are designed to optimize data transfers across wide area networks, they
    make extensive use of standards-based enhancements to the TCP protocol that may not be
     present in the TCP stack of many desktop and server operating systems. This includes improved
     transport capability for networks with high bandwidth-delay products (via HighSpeed TCP or
     MX-TCP), TCP Vegas for lower-bandwidth links, partial acknowledgements, and other less
     well-known but throughput-enhancing and latency-reducing features.
    VWE allows Steelhead appliances to repack TCP payloads with references that represent
    arbitrary amounts of data. This is possible because Steelhead appliances operate at the
    Application Layer and terminate TCP, which gives them more flexibility in the way they
    optimize WAN traffic.
    Essentially, the TCP payload is increased from its normal window size to an arbitrarily large
    amount dependent on the compression ratio for the connection. Because of this increased
    payload, a given application that relies on TCP performance (for example, HTTP or FTP) takes
    fewer trips across the WAN to accomplish the same task. For example, consider a client-to-
     server connection that has a 64KB TCP window. If there is 256KB of data
     to transfer, it takes several TCP windows to accomplish this on a network with high
     latency. With SDR, however, that 256KB of data can potentially be reduced to fit inside a single
     TCP window, removing the need to wait for acknowledgements before sending the
     next window and thus speeding the transfer.
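
     A rough calculation based on the example above shows why this matters on a high-latency link.
     The 100 ms round-trip time and 8:1 reduction ratio below are assumptions chosen only for
     illustration:

```python
import math

RTT_MS = 100                 # assumed WAN round-trip time
WINDOW = 64 * 1024           # 64KB client TCP window from the example above
DATA = 256 * 1024            # 256KB of application data to transfer

def windows_needed(data_bytes: int, window_bytes: int) -> int:
    """Send-and-wait windows needed to move data_bytes, ignoring slow start."""
    return math.ceil(data_bytes / window_bytes)

plain = windows_needed(DATA, WINDOW)            # 4 windows, roughly 4 round trips of waiting
reduced = windows_needed(DATA // 8, WINDOW)     # assumed 8:1 SDR/LZ reduction: 1 window
print(f"unoptimized: ~{plain} windows (~{plain * RTT_MS} ms of round-trip waiting)")
print(f"optimized:   ~{reduced} window (~{reduced * RTT_MS} ms)")
```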
    Transaction Prediction
    Application-level latency optimization is delivered through the Transaction Prediction module.
    Transaction Prediction leverages an intimate understanding of protocol semantics to reduce the
    chattiness that would normally occur over the WAN. By acting on foreknowledge of specific
    protocol request-response mechanisms, Steelhead appliances streamline the delivery of data that



     would normally be delivered in small increments through large numbers of interactions between
     the client and server over the WAN. As transactions are executed between the client and server,
     the Steelhead appliance intercepts each transaction, compares it to the database of past
     transactions, and makes decisions about the probability of future events.
     Based on this model, if a Steelhead appliance determines there is a high likelihood of a future
     transaction occurring, it performs that transaction, rather than waiting for the response from the
     server to propagate back to the client and then back to the server. Dramatic performance
     improvements result from the time saved by not waiting for each serial transaction to arrive prior
     to making the next request. Instead, the transactions are pipelined one right after the other.
     Of course, transactions are executed by Steelhead appliances ahead of the client only when it is
     safe to do so. To ensure data integrity, Steelhead appliances are designed with knowledge of the
     underlying protocols so they can determine when prediction is safe. Fortunately, a wide range of common
     applications have very predictable behaviors and, consequently, Transaction Prediction can
     enhance WAN performance significantly. When combined with SDR, Transaction Prediction can
     improve WAN performance up to 100 times.
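
     The effect of pipelining predicted transactions can be approximated with simple arithmetic: a
     chatty protocol pays one WAN round trip per serial transaction, while predicted follow-on
     requests are answered by the local Steelhead appliance. The round-trip times and transaction
     count below are assumptions for illustration only:

```python
WAN_RTT_MS = 100     # assumed WAN round-trip time
LAN_RTT_MS = 1       # assumed LAN round-trip time between client and client-side Steelhead
TRANSACTIONS = 50    # e.g., the sequential reads needed to open one document

serial_over_wan = TRANSACTIONS * WAN_RTT_MS                     # every request waits a WAN RTT
with_prediction = WAN_RTT_MS + (TRANSACTIONS - 1) * LAN_RTT_MS  # first request crosses the WAN;
                                                                # predicted requests are served locally
print(f"serial over the WAN: ~{serial_over_wan} ms")
print(f"with Transaction Prediction: ~{with_prediction} ms")
```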
     Common Internet File System (CIFS) Optimization
     CIFS is a proposed standard protocol that lets programs make requests for files and services on
     remote computers over the Internet. CIFS uses the client/server programming model. A client
     program makes a request of a server program (usually in another computer) for access to a file or
     to pass a message to a program that runs in the server computer. The server takes the requested
     action and returns a response. CIFS is a public or open variation of the Server Message Block
     (SMB) protocol developed and used by Microsoft.
     In the Steelhead appliance, CIFS optimization is enabled by default. Typically, you would only
     disable CIFS optimization to troubleshoot the system.
     Overlapping Opens
     Due to the way certain applications handle the opening of files, file locks are not properly
     granted to the application in such a way that would allow a Steelhead appliance to optimize
     access to that file using Transaction Prediction. To prevent any compromise to data integrity, the
     Steelhead appliance only optimizes data to which exclusive access is available (in other words,
     when locks are granted). When an opportunistic lock (oplock) is not available, the Steelhead
     appliance does not perform application-level latency optimizations but still performs SDR and
     compression on the data as well as TCP optimizations. The CIFS overlapping opens feature
     remedies this problem by having the server-side Steelhead handle file locking operations on
     behalf of the requesting application. If you disable this feature, the Steelhead appliance will still
     increase WAN performance, but not as effectively.
     Enabling this feature on applications that perform multiple opens of the same file to complete an
     operation will result in a performance improvement (for example, CAD applications).
     NOTE: For the Steelhead appliance to handle the locking properly, all transactions on the file
     must be optimized by that Steelhead appliance. Therefore, if a remote user opens a file that is
     optimized using the overlapping opens feature and a second user opens the same file, the second
     user might receive an error if that access does not pass through the Steelhead appliance (for
     example, with certain applications whose traffic stays on the LAN). If this occurs, you should
     disable overlapping opens optimization for those applications.






     Messaging Application Programming Interface (MAPI) Optimization
     MAPI optimization is enabled by default; typically, you disable it only to troubleshoot problems
     with the system. For example, if you are experiencing problems with Microsoft Outlook clients
     connecting to Exchange, you can disable MAPI latency acceleration (while continuing to
     optimize MAPI traffic with SDR). Features and behaviors of MAPI optimization include:
     •   Read ahead on attachments
     •   Read ahead on large emails
     •   Write behind on attachments
     •   Write behind on large emails
     •   If user authentication is set too high, MAPI latency optimization fails and the connection is
         downgraded to SDR/TCP acceleration only (no Transaction Prediction)
     MAPI Prepopulation
     Without MAPI prepopulation, if a user closes Microsoft Outlook or switches off the workstation,
     the TCP sessions are broken. With MAPI prepopulation, the Steelhead appliance can start acting
     as if it were the mail client. If the client closes the connection, the client-side Steelhead appliance
     keeps an open connection to the server-side Steelhead appliance, and the server-side
     Steelhead appliance keeps the connection open to the server. This allows data to be
     pushed through the data store before the user logs on to the server again. The default timer is
     96 hours; after that, the connection is reset.
     •   Optimized MAPI connections held open after client exit (acts like the client left the PC on);
         think of it as virtual client
     •   Keep reading mail until timeout
     •   No one is ever reconnected to the prepopulation session (including the original user)
     •   No need for more Client Access Licenses (CALs); no agents to deploy
     •   The check frequency and timeout are configurable, or the check can be disabled
     •   Enables transmission during off times even in consolidated environments
     •   The feature can be disabled independently from other MAPI optimizations
     HTTP Optimization
     A typical web page is not a single file that is downloaded all at once. Instead, web pages are
     composed of dozens of separate objects—including .jpg and .gif images, JavaScript code,
     cascading style sheets, and more—each of which must be requested and retrieved separately, one
     after the other. Given the presence of latency, this behavior is highly detrimental to the
     performance of web-based applications over the WAN.
     The higher the latency, the longer it takes to fetch each individual object and, ultimately, to
     display the entire page.
     RiOS v5.0 and later optimizes web applications using:
     • Parsing and Prefetching of Dynamic Content
     •   URL Learning




     •    Removal of Unfetchable Objects
     •    HTTP Metadata Responses
     •    Persistent Connections
     More information can be found in the Steelhead Appliance Management Console User’s Guide.
     NFS Optimization
     You can configure Steelhead appliances to use Transaction Prediction to perform application-
     level latency optimization on NFS. Application-level latency optimization improves NFS
     performance over high latency WANs.
     NFS latency optimization applies to TCP connections and is supported only for NFS v3.
     You can configure NFS settings globally for all servers and volumes, or you can configure NFS
     settings that are specific to particular servers or volumes. When you configure NFS settings for a
     server, the settings are applied to all volumes on that server unless you override settings for
     specific volumes.
     •    Read-ahead and read caching (checks freshness with modify date)
     •    Write-behind
     •    Metadata prefetching and caching
     •    Convert multiple requests into one larger request
     •    Special symbolic link handling
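
     One way to picture the precedence described above (global settings, overridden per server,
     overridden again per volume) is as a layered lookup. The option names below are placeholders
     for illustration, not actual RiOS configuration keys:

```python
def effective_nfs_settings(global_cfg: dict, server_cfg: dict, volume_cfg: dict) -> dict:
    """More specific layers override more general ones."""
    merged = dict(global_cfg)
    merged.update(server_cfg)   # per-server overrides apply to all volumes on that server...
    merged.update(volume_cfg)   # ...unless a specific volume overrides them again
    return merged

# Hypothetical option names, for illustration only.
global_cfg = {"read_ahead": True, "write_behind": True}
server_cfg = {"write_behind": False}     # override for one NFS server
volume_cfg = {"read_ahead": False}       # override for one volume on that server

print(effective_nfs_settings(global_cfg, server_cfg, volume_cfg))
# {'read_ahead': False, 'write_behind': False}
```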
     Microsoft SQL Optimization
     Steelhead appliance MS SQL protocol support includes the ability to perform prefetching and
     synthetic pre-acknowledgement of queries on database applications. Rules that increase
     optimization for Microsoft Project Enterprise Edition ship with the unit; however, MS SQL
     optimization is not enabled by default, and enabling it without adding application-specific
     rules will rarely have an effect on other applications. MS SQL packets must be
     carried in TDS (Tabular Data Stream) format for a Steelhead appliance to be able to perform
     optimization.
     You can also use MS SQL protocol optimization to optimize other database applications, but you
     must define SQL rules to obtain maximum optimization. If you are interested in enabling the MS
     SQL feature for other database applications, contact Riverbed Professional Services.
     Oracle Forms Optimization
     The Oracle Java Initiator (JInitiator) is a browser plug-in that provides access to Oracle
     E-Business Suite content and Oracle Forms applications directly within a web browser.
     The Steelhead appliance decrypts, optimizes, and then re-encrypts Oracle Forms native and
     HTTP mode traffic.
     Use Oracle Forms optimization to improve Oracle Forms traffic performance. Oracle Forms
     optimization does not require a separate license and is enabled by default; however, you must
     also configure an in-path rule for traffic to use this feature.






     TCP/IP
     General Operation
     Steelhead appliances are typically placed on two ends of the WAN as close to the client and
     server as possible (no additional WAN links between the end node and the Steelhead appliance).
     By placing Steelhead appliances in the network, the TCP session between client and server can
     be intercepted, which gives the appliances a level of control over that session. TCP sessions
     must be intercepted in order to be optimized; therefore, the Steelhead appliances must see all
     traffic from source to destination and back. For any given optimized session, there are three
     distinct TCP connections: one between the client and the client-side Steelhead appliance, one
     between the two Steelhead appliances, and one between the server-side Steelhead appliance and
     the server.
     Common Ports
     Ports Used by RiOS
       Port    Purpose
      7744 Data store sync port
      7800 In-path port
      7801 NAT port
      7810 Out-of-path port
      7820 Failover port for redundant appliances
      7830 Exchange traffic port
      7840 Exchange Director NSPI traffic port
      7850 Connection Forwarding (neighbor) port
      7860 Interceptor Appliance
      7870 Steelhead Mobile

      Interactive Ports Commonly Passed Through by Default on Steelhead Appliances (Partial List)
       Port           Service
       7              TCP Echo
       23             Telnet
       37             Time
       107            Remote Telnet Service
       179            Border Gateway Protocol
       513            Remote Login
       514            Shell
       1494, 2598     Citrix
       3389           MS WBT, TS/Remote Desktop
       5631           PC Anywhere
       5900 - 5903    VNC
       6000           X11

     Secure Ports Commonly Passed Through by Default on Steelhead Appliances (Partial List)
        Port          Service
        22/TCP        ssh
        49/TCP        tacacs
        443/TCP       https
        465/TCP       smtps
        563/TCP       nntps
        585/TCP       imap4-ssl
        614/TCP       sshell
        636/TCP       ldaps
        989/TCP       ftps-data
        990/TCP       ftps
        992/TCP       telnets
        993/TCP       imaps
        995/TCP       pop3s
        1701/TCP      l2tp
        1723/TCP      pptp
        3713/TCP      tftp over tls


     RiOS Auto-discovery Process
     Auto-discovery is the process by which the Steelhead appliance automatically intercepts and
     optimizes traffic on all IP addresses and ports. By default, auto-discovery is applied to all IP
     addresses and the ports which are not secure, interactive, or Riverbed well-known ports.
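
     Conceptually, the default decision can be pictured as a check of the destination port against the
     secure, interactive, and Riverbed (RBT-Proto) port labels. The sets below are small, illustrative
     subsets of the tables in the Common Ports section, not the complete labels:

```python
SECURE_PORTS = {22, 443, 465, 636, 993, 995}          # subset of the Secure port label
INTERACTIVE_PORTS = {23, 179, 513, 514, 3389, 5631}   # subset of the Interactive port label
RBT_PROTO_PORTS = {7800, 7801, 7810, 7820, 7830, 7840, 7850, 7860, 7870}

def default_action(dst_port: int) -> str:
    """Default behavior for a new TCP connection when no user-defined in-path rule matches."""
    if dst_port in SECURE_PORTS | INTERACTIVE_PORTS | RBT_PROTO_PORTS:
        return "pass-through"
    return "auto-discover"   # attempt optimization on everything else

print(default_action(445))   # auto-discover (e.g., CIFS)
print(default_action(443))   # pass-through (secure)
```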
     Packet Flow
     The following sequence shows the first-connection packet flow for traffic that is classified for
     optimization under the original auto-discovery protocol. The TCP SYN sent by the client is
     intercepted by the Steelhead appliance. A TCP option is attached in the TCP header; this allows
     the remote Steelhead appliance to recognize that there is a Steelhead appliance on the other side
     of the network. When the server-side Steelhead appliance sees the option (also known as a TCP
     probe) it responds to the option by sending a TCP SYN/ACK back. After auto-discovery has
     taken place, the Steelhead appliances continue to set up the TCP inner session and the TCP outer
     sessions.






      First-connection packet flow (C = client, S = server, SH1 = client-side Steelhead appliance,
      SH2 = server-side Steelhead appliance):
      1. The client sends IP(C)→IP(S):SYN. SH1 intercepts it and forwards IP(C)→IP(S):SYN+Probe.
      2. SH2 sees the probe and answers with IP(S)→IP(C):SYN/ACK+Probe response, announcing its
         service port (default TCP port 7800). SH1 caches the probe result for 10 seconds.
      3. SH1 and SH2 establish the inner connection (IP(SH1)→IP(SH2):SYN, SYN/ACK, ACK) and
         exchange setup information.
      4. SH2 completes the outer connection to the server (IP(C)→IP(S):SYN, IP(S)→IP(C):SYN/ACK,
         IP(C)→IP(S):ACK) and returns the connect result to SH1, which caches it until failure.
      5. SH1 completes the outer connection to the client (IP(S)→IP(C):SYN/ACK, IP(C)→IP(S):ACK).
      6. A pool of 20 inner connections is maintained between the Steelhead appliances.

     TCP Option
      The TCP option used for auto-discovery is 0x4C, which is 76 in decimal. The client-side
      Steelhead appliance attaches a 10-byte option to the TCP header; the server-side Steelhead
      appliance attaches a 14-byte option in return. Note that this is done only during the initial
      discovery process, not during setup of the inner connection between the Steelhead appliances or
      of the outer TCP sessions.
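
      When examining a packet capture, the probe shows up simply as a TCP option with kind 0x4C. The
      small parser over raw TCP header bytes sketched below is one way to spot it; the option payload
      is treated as opaque, and the code is illustrative rather than a Riverbed tool:

```python
RIVERBED_PROBE_KIND = 0x4C   # 76 decimal

def tcp_options(tcp_header: bytes) -> list:
    """Return (kind, payload) tuples from a raw TCP header; options begin at byte 20."""
    header_len = (tcp_header[12] >> 4) * 4   # data offset field, in bytes
    options, i = [], 20
    while i < header_len:
        kind = tcp_header[i]
        if kind == 0:            # End of Option List
            break
        if kind == 1:            # No-Operation (padding)
            i += 1
            continue
        length = tcp_header[i + 1]
        options.append((kind, tcp_header[i + 2:i + length]))
        i += length
    return options

def has_riverbed_probe(tcp_header: bytes) -> bool:
    return any(kind == RIVERBED_PROBE_KIND for kind, _ in tcp_options(tcp_header))
```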
     Enhanced Auto-Discovery Process
     In RiOS v4.0.x or later, enhanced auto-discovery (EAD) is available. Enhanced auto-discovery
     automatically discovers the last Steelhead appliance in the network path of the TCP connection.
     In contrast, the original auto-discovery protocol automatically discovers the first Steelhead
     appliance in the path. The difference is only seen in environments where there are three or more
     Steelhead appliances in the network path for connections to be optimized.
     Enhanced auto-discovery works with Steelhead appliances running the original auto-discovery
     protocol. Enhanced auto-discovery ensures that a Steelhead appliance only optimizes TCP
     connections that are being initiated or terminated at its local site, and that a Steelhead appliance
     does not optimize traffic that is transiting through its site.






      First-connection packet flow with enhanced auto-discovery (the probe still uses TCP option
      0x4C, but two probes are used back-to-back):
      1. The client sends IP(C)→IP(S):SYN (SEQ1). SH1 intercepts it and forwards
         IP(C)→IP(S):SYN SEQ1+Probe. SH1 caches the probe result for 10 seconds.
      2. SH2 forwards IP(C)→IP(S):SYN SEQ2+Probe toward the server and sends a SYN/ACK
         notification back to SH1 indicating that it is not yet confirmed as the last Steelhead
         appliance in the path.
      3. The server answers IP(S)→IP(C):SYN/ACK. Since no Steelhead appliance farther along
         responded to the probe, SH2 is the last (server-side) Steelhead appliance; it returns
         IP(S)→IP(C):SYN/ACK+Probe response with the connection result to SH1 and completes the
         outer connection to the server with IP(C)→IP(S):ACK. SH1 caches the connect result until
         failure.
      4. SH1 establishes the inner connection to SH2 (SYN, SYN/ACK, ACK to service port 7800) and
         exchanges setup information.
      5. SH1 completes the outer connection to the client (IP(S)→IP(C):SYN/ACK, IP(C)→IP(S):ACK),
         and a pool of 20 inner connections is maintained.


     Connection Pooling
     General Operation
     By default, all auto-discovered Steelhead appliance peers will have a default connection pool of
     20. The connection pool is a user configurable value which can be configured for each Steelhead
     appliance peer. The purpose of connection pooling is to avoid the TCP handshake for the inner
     session between the Steelhead appliances across the high latency WAN. By pre-creating these
     sessions between peer Steelhead appliances, when a new connection request is made by a client,
     the client-side Steelhead appliance can simply use the connections in the pool. Once a
     connection is pulled from the pool, a new connection is created to take its place so as to maintain
     the specified number of connections.
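
      The idea is easy to sketch: keep a queue of pre-established inner connections and replace each
      one as it is handed out. The connect() callable below is a generic stand-in for the actual
      inner-channel setup between Steelhead peers:

```python
from collections import deque

class InnerConnectionPool:
    """Maintain a fixed number of pre-established connections to one peer Steelhead."""

    def __init__(self, connect, size=20):       # 20 matches the default pool size
        self.connect = connect                  # stand-in for opening an inner TCP session
        self.size = size
        self.pool = deque(connect() for _ in range(size))

    def take(self):
        conn = self.pool.popleft()              # hand out a ready-made connection: no WAN handshake
        self.pool.append(self.connect())        # immediately create a replacement to keep the pool full
        return conn

# Usage with a stand-in connect function:
counter = iter(range(1000))
pool = InnerConnectionPool(connect=lambda: f"inner-conn-{next(counter)}")
print(pool.take())    # a new client connection gets an inner channel without waiting
```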
     In-path Rules
     General Operation
     In-path rules allow a client-side Steelhead appliance to determine what action to perform when
     intercepting a new client connection (the first TCP SYN packet for a connection). The action
     taken depends on the type of in-path rule selected and is outlined in detail below. It is important
     to note that the rules are matched based on source/destination IP information, destination port,
      and/or VLAN, and are processed from the first rule in the list to the last (top down). Rule
      processing stops at the first rule that matches the specified parameters, at which point the
      action selected by that rule is taken. Steelhead appliances have three pass-through rules by
      default, plus a fourth implicit rule that attempts auto-discovery of remote Steelhead appliances
      for any traffic not matched by the first three. The three default pass-through rules cover port
      groupings for interactive traffic (for example, Telnet, VNC, RDP), secure/encrypted traffic (for
      example, SSH, HTTPS), and Riverbed protocol ports (for example, 7800, 7810). A sketch of this
      first-match processing appears after the rule descriptions below.
     Different Types and Their Function
     • Pass Through. Pass through rules identify traffic that is passed through the network
          unoptimized. For example, you may define pass through rules to exclude subnets from
          optimization. Traffic is also passed through when the Steelhead appliance is in bypass mode.
           (Passthrough might occur because of in-path rules, because the connection was established
           before the Steelhead appliance was put in place, or before the Steelhead service was
           enabled.)
     •     Fixed-Target. Fixed-target rules specify out-of-path Steelhead appliances near the target
           server that you want to optimize. Determine which servers you want the Steelhead appliance
           to optimize (and, optionally which ports), and add rules to specify the network of servers,
           ports, port labels, and out-of-path Steelhead appliances to use. Fixed-target rules can also be
           used for in-path deployments for Steelhead appliances not using EAD.
     •     Auto Discover. Auto-discovery is the process by which the Steelhead appliance
           automatically intercepts and optimizes traffic on all IP addresses and ports. By default, auto-
           discovery is applied to all IP addresses and the ports which are not secure, interactive, or
           default Riverbed ports. Defining in-path rules modifies this default setting.
     •     Discard. Packets for the connection that match the rule are dropped silently. The Steelhead
           appliance filters out traffic that matches the discard rules. This process is similar to how
           routers and firewalls drop disallowed packets; the connection-initiating device has no
           knowledge of the fact that its packets were dropped until the connection times out.
      •     Deny. When packets for a connection match a deny rule, the Steelhead appliance actively
            tries to reset the TCP connection being attempted. Using an active reset process rather than a
            silent discard allows the connection initiator to know that its connection is disallowed.
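
      The first-match processing referred to above can be sketched as follows. The rule dictionaries
      and matching logic are simplified illustrations (a field of None acts as a wildcard), not the
      actual RiOS rule syntax:

```python
import ipaddress

def matches(rule, src, dst, dst_port, vlan):
    """A rule field of None acts as a wildcard ('all')."""
    def in_net(addr, net):
        return net is None or ipaddress.ip_address(addr) in ipaddress.ip_network(net)
    return (in_net(src, rule.get("src")) and
            in_net(dst, rule.get("dst")) and
            rule.get("dst_port") in (None, dst_port) and
            rule.get("vlan") in (None, vlan))

def in_path_action(rules, src, dst, dst_port, vlan=None):
    for rule in rules:                 # processed top down; the first matching rule wins
        if matches(rule, src, dst, dst_port, vlan):
            return rule["action"]
    return "auto-discover"             # implicit final rule

# A hypothetical rule base resembling the defaults plus one user-defined rule.
rules = [
    {"action": "pass-through", "dst_port": 443},        # secure traffic
    {"action": "pass-through", "dst_port": 7800},       # Riverbed inner channel
    {"action": "fixed-target", "dst": "10.20.0.0/16"},  # user-defined rule
]
print(in_path_action(rules, "10.1.1.5", "10.20.3.7", 8080))   # fixed-target
print(in_path_action(rules, "10.1.1.5", "192.0.2.10", 80))    # auto-discover (implicit rule)
```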
     Peering Rules
     Applicability and Conditions of Use
     Peering Rules
     Configuring peering rules defines what to do when a Steelhead appliance receives an auto-
     discovery probe from another Steelhead appliance. As such, the scope of a peering rule is limited
     to a server-side Steelhead appliance (the one receiving the probe). Note that peering rules on an
     intermediary Steelhead appliance (or server-side) will have no effect in preventing optimization
     with a client-side Steelhead appliance if it is using a fixed-target rule designating the
     intermediary Steelhead appliance as its destination (since there is no auto-discovery probe in a
     fixed-target rule). The following example shows where you might wish to use peering rules:
          Site A (Client, Steelhead1) -- WAN 1 -- Site B (Steelhead2, Server1) -- WAN 2 -- Site C (Steelhead3, Server2)

     Server1 is on the same LAN as Steelhead2 so connections from the client to Server1 should be
     optimized between Steelhead1 and Steelhead2. Concurrently, Server2 is on the same LAN as
     Steelhead3 and connections from the client to Server2 should be optimized between Steelhead1
     and Steelhead3.




     •    You do not need to define any rules on Steelhead1 or Steelhead3
     •    Add peering rules on Steelhead2 to process connections normally going to Server1 and to
          pass through all other connections so that connections to Server2 are not optimized by
          Steelhead2
      •    A rule to pass through inner connections between Steelhead1 and Steelhead3 is already in
           place by default (connections to destination port 7800 are covered by the default port label
           “RBT-Proto”)
     This configuration causes connections going to Server1 to be intercepted by Steelhead2, and
     connections going to anywhere else to be intercepted by another Steelhead appliance (for
     example, Steelhead3 for Server2).
     Overcoming Peering Issues Using Fixed-Target Rules
     If you do not enable automatic peering or define peering rules as described in the previous
     sections, you must define:
     • A fixed-target rule on Steelhead1 to go to Steelhead3 for connections to Server2
     •    A fixed-target rule on Steelhead3 to go to Steelhead1 for connections to servers in the same
          site as Steelhead1
     •    If you have multiple branches that go through Steelhead2, you must add a fixed-target rule
          for each of them on Steelhead1 and Steelhead3
     Steelhead Appliance Models and Capabilities
     Model Specifications (subject to change)




     Steelhead Appliance Ports
A Steelhead appliance has Console, AUX, Primary, WAN, and LAN ports.
     •    The Primary and AUX ports cannot share the same network subnet
     •    The Primary and In-path interfaces can share the same network subnet
     •    You must use the Primary port on the server-side for out-of-path deployment




•   You cannot use the Auxiliary (AUX) port for anything other than management
     •   If the Steelhead appliance is deployed between two switches, both the LAN and WAN ports
         must be connected with straight-through cables
     Interface Naming Conventions
The interface names for the bypass cards are a combination of the slot number and the port pair
(lan<slot>_<pair>, wan<slot>_<pair>). For example, if a four-port bypass card is located in slot 0 of
your appliance, the interface names are lan0_0, wan0_0, lan0_1, and wan0_1. If the bypass card is
located in slot 1 of your appliance, the interface names are lan1_0, wan1_0, lan1_1, and wan1_1.
The maximum number of copper LAN-WAN pairs (total paths) is ten: two pairs from the built-in
four-port card, six pairs from two six-port cards, and two more pairs from an additional four-port
card, for a maximum of ten pairs.






     II. Deployment
     Deployment Methods
     Physical In-path
     In a physical in-path deployment, the Steelhead appliance is physically in the direct path network
     traffic will take between clients and servers. The clients and servers continue to see client and
     server IP addresses and the Steelhead appliance bridges unoptimized traffic from its LAN facing
     side to its WAN facing side (and vice versa). Physical in-path configurations are suitable for any
     location where the total bandwidth is within the limits of the installed Steelhead appliance or
     serial cluster of Steelhead appliances. It is generally one of the simplest deployment options and
     among the easiest to maintain.
     Logical In-path
     In a logical in-path deployment, the Steelhead appliance is logically in the path between clients
     and servers. In a logical in-path deployment, clients and servers continue to see client and server
     IP addresses. This deployment differs from a physical in-path deployment in that a packet
     redirection mechanism is used to direct packets to Steelhead appliances that are not in the
     physical path of the client or server.
     Commonly used technologies for redirection are: Layer-4 switches, Web Cache Communication
     Protocol (WCCP), and Policy-based Routing (PBR).
     Server-Side Out-of-Path
A server-side out-of-path deployment is a network configuration in which the Steelhead
appliance is not in the direct or logical path between the client and the server. Instead, the server-
side Steelhead appliance is connected through its Primary interface and listens on port 7810 for
connections coming from client-side Steelhead appliances. In an out-of-path deployment, the
Steelhead appliance acts as a proxy. Unlike in-path deployments, where the server sees the original
client IP address, the out-of-path Steelhead appliance source-NATs the connection to its Primary
interface address. A server-side out-of-path configuration is suitable for data center locations where
physical in-path or logical in-path configurations are not possible. With server-side out-of-path,
client IP visibility is no longer available to the server (due to the NAT) and optimization initiated
from the server side is not possible (since there is no redirection of the outbound connection’s
packets to the Steelhead appliance).
     Physical Device Cabling
     Steelhead appliances have multiple physical and virtual interfaces. The Primary interface is
     typically used for management purposes, data store synchronization (if applicable), and for
     server-side out-of-path configurations. The Primary interface can be assigned an IP address and
     connected to a switch. You would use a straight-through cable for this configuration.
The LAN and WAN interfaces are purely L1/L2; no IP addresses can be assigned to them. Instead, a
logical L3 interface is created. This is the “In-path” interface, and it is named on a per-slot and
per-port-pair basis (in LAN/WAN pairs). A bypass card (or in-path card) in slot 0 with just one
LAN and one WAN interface will have a logical interface called inpath0_0. In-path interfaces
for a 4-port card in slot 1 will be inpath1_0 and inpath1_1, representing the two pairs of LAN/WAN
ports respectively.
Inpath1_0 represents lan1_0 and wan1_0; inpath1_1 represents lan1_1 and wan1_1.



For a physical in-path deployment, when connecting the LAN and WAN interfaces to the
network, treat each of them as you would a router port: use a crossover cable when connecting to a
router, host, or firewall, and a straight-through cable when connecting to a switch. The Steelhead
appliance supports auto-MDIX (medium dependent interface crossover); however, if you use the
wrong cables you risk breaking the connection between the devices the Steelhead appliance is
placed between, especially while in bypass, because those devices may not support auto-MDIX.
For a virtual in-path deployment only the WAN interface needs to be connected. The LAN interface
does not need to be connected and is shut down automatically as soon as the virtual in-path
option is enabled in the Steelhead appliance's configuration.
     For server-side out-of-path deployments only the Primary interface needs to be connected.
     In-path
     In-path Networks
     Physical in-path configurations are suitable for locations where the total bandwidth is within the
     limits of the installed Steelhead appliance or serial cluster of Steelhead appliances.
The Steelhead appliance can be physically connected to both access ports and trunks. When the
Steelhead appliance is placed on a trunk, the in-path interface must be able to tag its traffic with
the correct VLAN number. The supported trunking protocol is 802.1Q (“Dot1Q”). A tag can be
assigned via the GUI or the CLI. The CLI command for this is:
     HOSTNAME (config) # in-path interface inpathx_x vlan <id>
     Inter-Steelhead appliance traffic will use this VLAN (except in Full Transparent connections as
     explained below).
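For example, to tag the traffic of the first in-path interface with VLAN 100 (the VLAN ID is purely
illustrative):
HOSTNAME (config) # in-path interface inpath0_0 vlan 100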
There are several variations of the in-path deployment. Steelhead appliances can be placed in
series for redundancy. Peering rules based on peer IP addresses must be applied to both
Steelhead appliances so that they do not peer with each other. When using 4-port cards, and thus
multiple in-path IP addresses, all addresses must be covered by these rules to avoid peering.
     A serial cluster is a failover design that can be used to mitigate the risk of possible network
     instabilities and outages caused by a single Steelhead appliance failure (typically caused by
     excessive bandwidth as there is no longer data reduction occurring). When the maximum number
     of TCP connections for a Steelhead appliance is reached, that appliance stops intercepting new
     connections. This allows the next Steelhead appliance in the cluster the opportunity to intercept
     the new connections, if it has not reached its maximum number of connections. The in-path
     peering rules and in-path rules are used so that the Steelhead appliances in the cluster know not
     to intercept connections between themselves.
Appliances in a failover deployment process the peering rules you specify in a spill-over fashion.
A keepalive mechanism is used between the two Steelhead appliances to monitor each other's
status and to set a master and backup state on both appliances. It is recommended to make the
LAN-side Steelhead appliance the master because of the amount of pass-through traffic between
the Steelhead appliance and the client or server. Optionally, data stores can be synchronized to
ensure warm performance in case of a failure.
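A minimal sketch of a master/backup failover configuration on the master appliance, assuming this
command form (the peer in-path address is a placeholder; confirm the exact syntax in the RiOS CLI
reference):
HOSTNAME (config) # failover master
HOSTNAME (config) # failover steelhead addr 192.0.2.6
HOSTNAME (config) # failover enable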
If Steelhead appliances are deployed in parallel with each other, measures need to be taken to
prevent asymmetric traffic from being passed through without optimization. This usually occurs
when two or more routing points exist in the network and traffic is spread over the links
simultaneously. Connection Forwarding can be used to exchange flow information between



     the Steelhead appliances in the parallel deployment. Multiple Steelhead appliances can be
     bundled together.
     WAN Visibility Modes
     WAN visibility pertains to how packets traversing the WAN are addressed. RiOS v5.0 offers
     three types of WAN visibility modes: correct addressing, port transparency, and full address
     transparency.
     You configure WAN visibility on the client-side Steelhead appliance (where the connection is
initiated). The server-side Steelhead appliance must also support multiple WAN visibility modes
(RiOS v5.0 or later).
     Correct Addressing
     Correct addressing uses Steelhead appliance IP addresses and port numbers in the TCP/IP packet
     header fields for optimized traffic in both directions across the WAN. This is the default setting.
     This is “correct” as the devices which are communicating (the TCP endpoints) are the Steelhead
     appliances, so their IP addresses/ports are reflected in the connection.
     Port Transparency
     Port address transparency preserves your server port numbers in the TCP/IP header fields for
     optimized traffic in both directions across the WAN. Traffic is optimized while the server port
     number in the TCP/IP header field appears to be unchanged. Routers and network monitoring
     devices deployed in the WAN segment between the communicating Steelhead appliances can
     view these preserved fields. Use port transparency if you want to manage and enforce QoS
     policies that are based on destination ports. If your WAN router is following traffic classification
     rules written in terms of client and network addresses, port transparency enables your routers to
     use existing rules to classify the traffic without any changes. Port transparency enables network
     analyzers deployed within the WAN (between the Steelhead appliances) to monitor network
     activity and to capture statistics for reporting by inspecting traffic according to its original TCP
     port number. Port transparency does not require dedicated port configurations on your Steelhead
     appliances.
     NOTE: Port transparency only provides server port visibility. It does not provide client and
     server IP address visibility, nor does it provide client port visibility.
     Full Transparency
     Full address transparency preserves your client and server IP addresses and port numbers in the
     TCP/IP header fields for optimized traffic in both directions across the WAN. It also preserves
     VLAN tags. Traffic is optimized while these TCP/IP header fields appear to be unchanged.
     Routers and network monitoring devices deployed in the WAN segment between the
     communicating Steelhead appliances can view these preserved fields. If both port transparency
     and full address transparency are acceptable solutions, port transparency is preferable. Port
     transparency avoids potential networking risks that are inherent to enabling full address
     transparency. For details, see the Steelhead Appliance Deployment Guide. However, if you must
     see your client or server IP addresses across the WAN, full transparency is your only
     configuration option.
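WAN visibility is selected per in-path rule on the client-side Steelhead appliance. The sketch below
assumes the rule accepts a wan-visibility option with values such as port and full (addresses and rule
numbers are placeholders; verify the option names in the RiOS v5.0 CLI reference):
HOSTNAME (config) # in-path rule auto-discover dstaddr 10.2.0.0/16 wan-visibility port rulenum 1
HOSTNAME (config) # in-path rule auto-discover dstaddr 10.3.0.0/16 wan-visibility full rulenum 2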
     Out-of-Band (OOB) Splice
     What is the OOB Splice?
     An OOB splice is an independent, separate TCP connection made on the first connection
     between two peer Steelhead appliances used to transfer version, licensing and other OOB data
     between peer Steelhead appliances. An OOB connection must exist between two peers for


     connections between these peers to be optimized. If the OOB splice dies all optimized
     connections on the peer Steelhead appliances will be terminated.
     The OOB connection is a single connection existing between two Steelhead appliances
     regardless of the direction of flow. So if you open one or more connections in one direction, then
     initiate a connection from the other direction, there will still be only one connection for the OOB
     splice. This connection is made on the first connection between two peer Steelhead appliances
     using their in-path IP addresses and port 7800 by default. The OOB splice is rarely of any
     concern except in full transparency deployments.
     Case Study
     In the example below, the Client is trying to establish connection to Server-1:
[Topology diagram: the Client (10.1.0.10) and client-side Steelhead CFE-1 (10.1.0.2) sit behind firewall
FW-1 (inside 10.1.0.1, outside 1.1.1.1). Across the WAN, firewall FW-2 (outside 2.2.2.2, inside 10.2.0.1)
fronts the server-side Steelhead SFE-1 (10.2.0.2) and Server-1 (10.2.0.10). A second server-side site with
SFE-2 (10.3.0.2) and Server-2 (10.3.0.10) is reached via 10.3.0.1.]
Issue 1: After establishing the inner connection, the client-side Steelhead appliance (CFE-1) tries to
establish an OOB connection to the server-side Steelhead appliance (SFE-1). It addresses it using the
IP address reported by SFE-1 in the probe response (10.2.0.2). Clearly, the connection to this address
will fail, since 10.2.x.x addresses are not valid outside of the firewall (FW-2).
Resolution 1: In the above example, there is one combination of address and port (IP:port) we
know about: the connection the client is destined for, which is Server-1, and the client should be able
to connect to Server-1. Therefore, the OOB splice creation code in sport can be changed to create
a transparent OOB connection from the Client to Server-1 if the corresponding inner connection
is transparent.
     How to Configure
There are three options for addressing the OOB splice connection problem described in Issue 1 above.
In the default configuration, the out-of-band connection uses the IP addresses of the client-side
and server-side Steelhead appliances. This is known as correct addressing and is the default
behavior. This configuration works for the majority of networks but fails in the network topology
described above. The command below shows the default setting in a Steelhead
appliance’s configuration.
     in-path peering oobtransparency mode none
In the network topology discussed in Issue 1, the default configuration does not work. There are
two oobtransparency modes that may succeed in establishing the peer connection: destination and
full. When destination mode is used, the OOB connection is addressed to the IP and port of the first
server reached through the Steelhead appliances, while the source remains the client-side Steelhead
appliance's IP address and a port that it chooses. To change to this configuration use the following
CLI command:



     in-path peering oobtransparency mode destination
In oobtransparency full mode, the IP address of the first client is used as the source, together with a
port pre-configured on the client-side Steelhead appliance (port 708 by default). The destination IP
and port are the same as in destination mode, i.e., those of the server. This is the recommended
configuration when VLAN transparency is required. To change to this configuration use the
following CLI command:
     in-path peering oobtransparency mode full
To change the default port used by the client-side Steelhead appliance when
oobtransparency mode full is configured, use the following CLI command:
in-path peering oobtransparency port <port>
     It is important to note that these oobtransparency options are only used with full transparency. If
     the first inner-connection to a Steelhead was not transparent, the OOB will always use correct
     addressing.
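For example, to move the client-side OOB splice port from the default of 708 to another value (7081
below is purely illustrative):
HOSTNAME (config) # in-path peering oobtransparency port 7081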
     Virtual In-path
     Introduction to Virtual In-path Deployments
     In a virtual in-path deployment, the Steelhead appliance is virtually in the path between clients
     and servers. Traffic moves in and out of the same WAN interface. This deployment differs from
     a physical in-path deployment in that a packet redirection mechanism is used to direct packets to
     Steelhead appliances that are not in the physical path of the client or server.
     Redirection mechanisms:
     • Layer-4 Switch. You enable Layer-4 switch (or server load-balancer) support when you
        have multiple Steelhead appliances in your network to manage large bandwidth
        requirements.
     •    PBR (Policy-Based Routing). PBR enables you to redirect traffic to a Steelhead appliance
          that is configured as virtual in-path device. PBR allows you to define policies to redirect
          packets instead of relying on routing protocols. You define policies to redirect traffic to the
          Steelhead appliance and policies to avoid loop-back.
     •    WCCP (Web Cache Communication Protocol). WCCP was originally implemented on
          Cisco routers, multi-layer switches, and web caches to redirect HTTP requests to local web
          caches (version 1). Version 2, which is supported on Steelhead appliances, can redirect any
          type of connection from multiple routers or web caches and different ports.
     Policy-Based Routing (PBR)
     Introduction to PBR
     PBR is a router configuration that allows you to define policies to route packets instead of
     relying on routing protocols. It is enabled on an interface basis and packets coming into a PBR-
     enabled interface are checked to see if they match the defined policies. If they do match, the
     packets are routed according to the rule defined for the policy. If they do not match, packets are
     routed based on the usual routing table. The rules can redirect the packets to a specific IP
     address.
     To avoid an infinite loop, PBR must be enabled on the interfaces where the client traffic is
     arriving and disabled on the interfaces corresponding to the Steelhead appliance. The common
     best practice is to place the Steelhead appliance on a separate subnet.
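A minimal Cisco IOS sketch of such a PBR configuration (the ACL number, interface, subnets, and the
Steelhead in-path address 192.0.2.2 are placeholders; adapt it to your IOS version and topology):
Router(config)# access-list 100 permit tcp 10.1.0.0 0.0.255.255 any
Router(config)# route-map RIVERBED permit 10
Router(config-route-map)# match ip address 100
Router(config-route-map)# set ip next-hop 192.0.2.2
Router(config-route-map)# exit
Router(config)# interface fastEthernet 0/0
Router(config-if)# ip policy route-map RIVERBED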
     One of the major issues with PBR is that it can black hole traffic (drop all TCP connections to a
     destination) if the device it is redirecting to fails. To avoid black holing traffic, PBR must have a



     way of tracking whether the PBR next hop is available. You can enable this tracking feature in a
     route map with the following Cisco router command:
     set ip next-hop verify-availability
     With this command, PBR attempts to verify the availability of the next hop using information
     from CDP. If that next hop is unavailable, it skips the actions specified in the route map. PBR
     checks availability in the following manner:
     1. When PBR first attempts to send to a PBR next hop, it checks the CDP neighbor table to see
         if the IP address of the next hop appears to be available. If so, it sends an Address Resolution
         Protocol (ARP) request for the address, resolves it, and begins redirecting traffic to the next
         hop (the Steelhead appliance).
     2. After PBR has verified the next hop, it continues to send to the next hop as long as it obtains
         answers from the ARP request for the next hop IP address. If the ARP request fails to obtain
         an answer, it then rechecks the CDP table. If there is no entry in the CDP table, it no longer
         uses the route map to send traffic. This verification provides a failover mechanism.
     In more recent versions of the Cisco IOS software, there is a feature called PBR with Multiple
     Tracking Options. In addition to the old method of using CDP information, it allows methods
     such as HTTP and ping to be used to determine whether the PBR next hop is available. Using
     CDP allows you to run with older IOS 12.x versions.
     WCCP Deployments
     Introduction to WCCP
WCCP is a stateful protocol that the router and Steelhead appliance use to redirect traffic to the
Steelhead appliance so that it can optimize it. Several functions have to be covered to make the
protocol stateful and scalable: failover, load distribution, and negotiation of connection parameters
all have to be communicated throughout the cluster that the Steelhead appliances and routers form
upon successful negotiation. The protocol has four messages to cover all of the above functions:
     • HERE_I_AM. Sent by Steelhead appliances to announce themselves.
     • I_SEE_YOU. Sent by WCCP enabled routers to respond to announcements.
     • REDIRECT_ASSIGN. Sent by the designated Steelhead appliance to determine flow
         distribution.
     • REMOVAL_QUERY. Sent by router to check a Steelhead appliance after missed
         HERE_I_AM messages.
     When you configure WCCP on a Steelhead appliance:
     • Routers and Steelhead appliances are added to the same service group.
     • Steelhead appliances announce themselves to the routers.
     • Routers respond with their view of the service group.
•    One Steelhead appliance becomes the designated CE (caching engine) and tells the routers how to
     redirect traffic among the Steelhead appliances in the service group.
     How Steelhead Appliances Communicate with Routers
Steelhead appliances can use one of the following methods to communicate with routers (a unicast configuration sketch follows this list):
     • Unicast UDP. The Steelhead appliance is configured with the IP address of each router. If
         additional routers are added to the service group, they must be added on each Steelhead
         appliance.
     • Multicast UDP. The Steelhead appliance is configured with a multicast group. If additional
         routers are added, you do not need to add or change configuration settings on the Steelhead
         appliances.
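With unicast UDP, each router address is listed explicitly on every Steelhead appliance. A sketch (the
service group number and router addresses are placeholders, consistent with the multicast example
shown later under Advanced WCCP Configuration):
WCCP Steelhead (config) # wccp enable
WCCP Steelhead (config) # wccp service-group 90 routers 10.0.0.1,10.0.0.2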



     Redirection
By default, all TCP traffic is redirected. Optionally, a redirect-list can be defined so that only
traffic matching the redirect-list is redirected. A redirect-list in a WCCP configuration refers to an
ACL that is configured on the router to select the traffic that will be redirected.
     Traffic is redirected using one of the following schemes:
     • GRE (Generic Routing Encapsulation). Each data packet is encapsulated in a GRE packet
         with the Steelhead appliance IP address configured as the destination. This scheme is
         applicable to any network.
     • L2 (Layer 2). Each packet MAC address is rewritten with a Steelhead appliance MAC
         address. This scheme is possible only if the Steelhead appliance is connected to a router at
         Layer 2.
     • Either. The either value uses L2 first—if Layer 2 is not supported, GRE is used. This is the
         default setting.
You can configure your Steelhead appliance to not encapsulate return packets. This allows your
WCCP Steelhead appliance to negotiate with the router or switch as if it were going to send gre-
return packets, but to actually send l2-return packets. This configuration is optional but
recommended when connected directly at L2. The command to override WCCP packet return
negotiation is wccp l2-return enable. Be sure the network design permits this.
     Load Balancing and Failover
     WCCP supports unequal load balancing. Traffic is redirected based on a hashing scheme and the
     weight of the Steelhead appliances. Each router uses a 256-bucket Redirection Hash Table to
     distribute traffic for a Service Group across the member Steelhead appliances. It is the
     responsibility of the Service Group's designated Steelhead appliance to assign each router's
     Redirection Hash Table. The designated Steelhead appliance uses a
     WCCP2_REDIRECT_ASSIGNMENT message to assign the routers' Redirection Hash Tables.
     This message is generated following a change in Service Group membership and is sent to the
     same set of addresses to which the Steelhead appliance sends WCCP2_HERE_I_AM messages.
     A router will flush its Redirection Hash Table if a WCCP2_REDIRECT_ASSIGNMENT is not
     received within five HERE_I_AM_T seconds of a Service Group membership change. The
hash algorithm can use several different input fields to produce an 8-bit output (the bucket value).
The default input fields are the source and destination IP addresses of the redirected packet; the
source and destination TCP ports, or any combination of these fields, can also be used.
The weight determines the percentage of traffic a Steelhead appliance in a cluster receives; the
hashing algorithm determines which flow is redirected to which Steelhead appliance. The default
weight is based on the Steelhead appliance model number: the weight is heavier for models that
support more connections. You can modify the default weight if desired.
     With the use of weight you can also create an active/passive cluster by assigning a weight of 0 to
     the passive Steelhead appliance. This Steelhead appliance will only get traffic when the active
     Steelhead appliance fails.
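Assuming the weight can be set as part of the service-group definition (the parameter name and values
below are illustrative and should be checked against the RiOS CLI reference), an active/passive pair
might be configured like this:
ACTIVE-SH (config) # wccp service-group 90 routers 10.0.0.1 weight 100
PASSIVE-SH (config) # wccp service-group 90 routers 10.0.0.1 weight 0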
     Assignment and Redirection Methods
     The assignment method refers to how a router chooses which Steelhead appliance in a WCCP
     service group to redirect packets to. There are two assignment methods: the Hash assignment
     method and the Mask assignment method. Steelhead appliances support both the Hash
     assignment and Mask assignment methods.
     HASH



Redirection using Hash assignment is a two-stage process. In the first stage, a primary key is
formed from the packet (as defined by the Service Group) and is hashed to yield an index. This
index is then looked up in the Redirection Hash Table.
The entry in the Redirection Hash Table is either an unflagged web-cache (Steelhead appliance)
index, an unassigned bucket, or a bucket flagged for a secondary hash. If the entry is an unflagged
web-cache index, the packet is redirected to that web-cache. If the bucket is unassigned, the packet
is forwarded normally. If the bucket is flagged, indicating a secondary hash, then a secondary key is
formed (as defined by the Service Group description), hashed to yield an index, and looked up again
in the Redirection Hash Table. If this secondary entry contains a web-cache index, the packet is
redirected to that web-cache; if the entry is unassigned, the packet is forwarded normally.
     MASK
The first phase of Mask assignment is defining the mask itself. The mask can be up to seven bits
and can be applied to the source TCP port, destination TCP port, source IP address, or destination
IP address, or a combination of the four attributes, but may not exceed seven bits in total.
Depending on the number of bits selected, a different number of buckets is created and assigned to
the Steelhead appliances in the service group. As traffic traverses the router, a bitwise AND
operation is performed between the mask and the IP addresses/TCP ports covered by the mask.
The traffic is assigned to the different buckets based on the result of the AND operation.
Mask/value pairs are processed in the order they are received and are compared in turn against the
(up to) seven mask bits.
     From Internet-Draft WCCP version 2 (http://www.wrec.org/Drafts/draft-wilson-wrec-wccp-v2-
     00.txt ):
     Note that in all of the mask fields of this element a zero means "Don't care”.
     0                    1                   2                         3
          0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                     Source Address Mask                       |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                   Destination Address Mask                    |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |      Source Port Mask         |   Destination Port Mask       |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

     •   Source Address Mask. The 32-bit mask to be applied to the source IP address of the packet.
     •   Destination Address Mask. The 32-bit mask to be applied to the destination IP address of
         the packet.
     •   Source Port Mask. The 16-bit mask to be applied to the TCP/UDP source port field of the
         packet.
     •   Destination Port Mask. The 16-bit mask to be applied to the TCP/UDP destination port
         field of the packet.
Although it may not be obvious from the details above, there is a priority bit order when using
Mask assignment. The diagram reads from most significant to least significant, bottom left to top;
in other words, the priority order of the bits is source port, destination port, destination address,
and then source address. Knowing this is helpful when troubleshooting, to determine which bucket
a specific resource is allocated to.





     For more information regarding Hash or Mask assignment, refer to the Steelhead Appliance
     Deployment Guide and the whitepaper “WCCP Mask Assignment” provided on the Riverbed
     Partner Portal and/or Riverbed Technical Support site.
     Advanced WCCP Configuration
     Using Multicast Groups
     If you add multiple routers and Steelhead appliances to a service group, you can configure them
     to exchange WCCP protocol messages through a multicast group. Configuring a multicast group
     is advantageous because if a new router is added, it does not need to be explicitly added on each
     Steelhead appliance.
     Multicast addresses must be between 224.0.0.0 and 239.255.255.255.
     Configuring Multicast Groups on the Router
     On the router, at the system prompt, enter the following set of commands:
     Router> enable
     Router# configure terminal
     Router(config)# ip wccp 90 group-address 224.0.0.3
     Router(config)# interface fastEthernet 0/0
     Router(config-if)# ip wccp 90 redirect in
     Router(config-if)# ip wccp 90 group-listen
     Router(config-if)# end
     Router# write memory
     NOTE: Multicast addresses must be between 224.0.0.0 and 239.255.255.255.
     Configuring Multicast Groups on the Steelhead Appliance
     On the WCCP Steelhead appliance, at the system prompt, enter the following set of commands:
     WCCP Steelhead > enable
     WCCP Steelhead # configure terminal
     WCCP Steelhead (config) # wccp enable
     WCCP Steelhead (config) # wccp mcast-ttl 10
     WCCP Steelhead (config) # wccp service-group 90 routers 224.0.0.3
     WCCP Steelhead (config) # write memory
     WCCP Steelhead (config) # exit
     Limiting Redirection by TCP Port
     By default all TCP ports are redirected, but you can configure the WCCP Steelhead appliance to
     tell the router to redirect only certain TCP source or destination ports. You can specify up to a
     maximum of seven ports per service group.
     Using Access Lists for Specific Traffic Redirection
     If redirection is based on traffic characteristics other than ports, you can use ACLs on the router
     to define what traffic is redirected.
     ACL considerations:
     • ACLs are processed in order, from top to bottom. As soon as a particular packet matches a
       statement, it is processed according to that statement and the packet is not evaluated against
       subsequent statements. Therefore, the order of your access-list statements is very important.



     •   If no port information is explicitly defined, all ports are assumed.
     •   By default all lists include an implied deny all entry at the end, which ensures that traffic that
         is not explicitly included is denied. You cannot change or delete this implied entry.
     Access Lists: Best Practice
     To avoid requiring the router to do extra work, Riverbed recommends that you create an ACL
     that routes only TCP traffic to the Steelhead appliance. When a WCCP configured Steelhead
     appliance receives UDP, GRE, ICMP, and other non-TCP traffic, it returns the traffic to the
     router.
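For example, a redirect-list on the router that matches only TCP traffic might look like this (the ACL
number and service group are placeholders):
Router(config)# access-list 130 permit tcp any any
Router(config)# ip wccp 90 redirect-list 130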
     Verifying and Troubleshooting WCCP Configuration
     Checking the Router Configuration
     On the router, at the system prompt, enter the following set of commands:
     Router>en
     Router#show ip wccp
     Router#show ip wccp 90 detail
     Router#show ip wccp 90 view
     Verifying WCCP Configuration on an Interface
     On the router, at the system prompt, enter the following set of commands:
     Router>en
     Router#show ip interface
     Look for WCCP status messages near the end of the output.
     You can trace WCCP packets and events on the router.
     Checking the Access List Configuration
     On the router, at the system prompt, enter the following set of commands:
     Router>en
     Router#show access-lists <access_list_number>
     Tracing WCCP Packets and Events on the Router
     On the router, at the system prompt, enter the following set of commands:
     Router>en
     Router#debug ip wccp events
     WCCP events debugging is on
     Router#debug ip wccp packets
     WCCP packet info debugging is on
     Router#term mon

     Server-Side Out-of-Path Deployments
     Out-of-path Networks
     An out-of-path deployment is a network configuration in which the Steelhead appliance is not in
     the direct physical or logical path between the client and the server. In an out-of-path
     deployment, the Steelhead appliance acts as a proxy. An out-of-path configuration is suitable for
     data center locations where physical in-path or virtual in-path configurations are not possible.





     In an out-of-path deployment, the client-side Steelhead appliance is configured as an in-path
     device, and the server-side Steelhead appliance is configured as an out-of-path device.
     The command to enable server-side out-of-path is:
     HOSTNAME (config) # out-of-path enable




[Diagram: the client-side in-path Steelhead appliance (LAN and WAN interfaces) applies a fixed-target
rule and sends traffic across the WAN to the Primary (PRI) interface of the server-side Steelhead
appliance; toward the server, traffic is sourced from the server-side Steelhead appliance's IP address
(IP SRC = S-SH).]
     A fixed-target rule is applied on the client-side Steelhead appliance to make sure the TCP session
     is intercepted and statically sent to the out-of-path Steelhead appliance on the server side. When
     enabling out-of-path on the server-side Steelhead appliance, it starts listening on port 7810 for
     incoming connections from a client-side Steelhead appliance.
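On the client-side Steelhead appliance, such a fixed-target rule points at the Primary interface address
of the server-side appliance on port 7810. A sketch with placeholder addresses:
CLIENT-SH (config) # in-path rule fixed-target target-addr 192.0.2.10 target-port 7810 dstaddr 10.2.0.0/24 rulenum 1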
     The Steelhead appliance can perform NAT. The server will see the IP address of the Steelhead
     appliance as the source of the connection so the packets are returned to the Steelhead appliance
     instead of the client. This is necessary to make sure that the bidirectional traffic is seen by the
     Steelhead appliance. Also keep in mind that optimization will only occur when the TCP
     connection is initiated by the client.
     Out-of-Path, Failover Deployment
     An out-of-path, failover deployment serves networks where an in-path deployment is not an
     option. This deployment is cost effective, simple to manage, and provides redundancy.
     When both Steelhead appliances are functioning properly, the connections traverse the master
     appliance. If the master Steelhead appliance fails, subsequent connections traverse the backup
     Steelhead appliance. When the master Steelhead appliance is restored, the next connection
     traverses the master Steelhead appliance. If both Steelhead appliances fail, the connection is
     passed through unoptimized to the server. The way to do this is to specify multiple target
     appliances in the fixed-target in-path rule on the client-side Steelhead appliance.
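Assuming the fixed-target rule accepts a backup target (the backup option names shown here are
assumptions to be verified against the RiOS CLI reference; addresses are placeholders), a sketch might
be:
CLIENT-SH (config) # in-path rule fixed-target target-addr 192.0.2.10 target-port 7810 backup-addr 192.0.2.11 backup-port 7810 dstaddr 10.2.0.0/24 rulenum 1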






[Diagram: a data center LAN in which the WAN connects to a router and then a switch leading to the
server; Steelhead A and Steelhead B are both attached to the data center LAN as out-of-path
appliances.]
     Hybrid Mode: In-Path and Server-Side Out-of-Path Deployment
A hybrid mode deployment serves offices that have a single WAN routing point and local users,
and where the Steelhead appliance must also be referenced from remote sites as an out-of-path
device (for example, to avoid mistaken auto-discovery or to bypass intermediary Steelhead appliances).
     The following figure illustrates the client-side of the network where the Steelhead appliance is
     configured as both an in-path and server-side out-of-path device.

[Diagram: clients and an FTP server connect through a switch to the Steelhead appliance (its Primary
interface is also connected), which in turn connects to a firewall/VPN device facing the WAN; a web
server sits in a DMZ off the firewall.]

     In this hybrid design, a client-side Steelhead appliance (not shown) would use a typical auto-
     discovery process to optimize any data going to or coming from the clients shown. If however, a
     remote user would like to get optimization to the DMZ shown above, the standard auto-
     discovery process would not function properly since the packet flow would prevent the auto-
     discovery probe from ever reaching the Steelhead appliance. To remedy this, a fixed-target rule
     matching the destination address of the DMZ and targeted to the Primary (PRI) interface of the
     Steelhead appliance above will ensure that the traffic will reach the Steelhead appliance, and due
     to the server-side out-of-path NAT process, will ensure that it returns to the Steelhead appliance
     for optimization on the return path.
     Asymmetric Route Detection
     Asymmetric auto-detection enables Steelhead appliances to detect the presence of asymmetry
     within the network. Asymmetry is detected by the client-side Steelhead appliances. Once
     detected, the Steelhead appliance will pass through asymmetric traffic unoptimized allowing the
     TCP connections to continue to work. The first TCP connection for a pair of addresses might be



Más contenido relacionado

Destacado

Presentation riverbed steelhead appliance main 2010
Presentation   riverbed steelhead appliance main 2010Presentation   riverbed steelhead appliance main 2010
Presentation riverbed steelhead appliance main 2010
chanwitcs
 
Order management, provisioning and activation
Order management, provisioning and activationOrder management, provisioning and activation
Order management, provisioning and activation
VijayIndra Shekhawat
 
Order Management Overview
Order Management OverviewOrder Management Overview
Order Management Overview
Robert Ransom
 
Telecom OSS/BSS Overview
Telecom OSS/BSS OverviewTelecom OSS/BSS Overview
Telecom OSS/BSS Overview
magidg
 

Destacado (14)

SteelCentral NetSensor 3.0
SteelCentral NetSensor 3.0SteelCentral NetSensor 3.0
SteelCentral NetSensor 3.0
 
RiOS 8.5 launch presentation
RiOS 8.5 launch presentationRiOS 8.5 launch presentation
RiOS 8.5 launch presentation
 
SteelHead 8.6
SteelHead 8.6SteelHead 8.6
SteelHead 8.6
 
Modernizing Edge IT with Riverbed SteelFusion
Modernizing Edge IT with Riverbed SteelFusionModernizing Edge IT with Riverbed SteelFusion
Modernizing Edge IT with Riverbed SteelFusion
 
Presentation riverbed steelhead appliance main 2010
Presentation   riverbed steelhead appliance main 2010Presentation   riverbed steelhead appliance main 2010
Presentation riverbed steelhead appliance main 2010
 
VMware & Riverbed
VMware & RiverbedVMware & Riverbed
VMware & Riverbed
 
Riverbed Remote Office/Branch Office IT Survey
Riverbed Remote Office/Branch Office IT SurveyRiverbed Remote Office/Branch Office IT Survey
Riverbed Remote Office/Branch Office IT Survey
 
Next generation OSS/BSS architecture
Next generation OSS/BSS architectureNext generation OSS/BSS architecture
Next generation OSS/BSS architecture
 
Order management, provisioning and activation
Order management, provisioning and activationOrder management, provisioning and activation
Order management, provisioning and activation
 
Telecommunication Business Process - eTOM Flows
Telecommunication Business Process - eTOM FlowsTelecommunication Business Process - eTOM Flows
Telecommunication Business Process - eTOM Flows
 
Order Management Overview
Order Management OverviewOrder Management Overview
Order Management Overview
 
Telecom BSS
Telecom BSSTelecom BSS
Telecom BSS
 
Telecom OSS/BSS Overview
Telecom OSS/BSS OverviewTelecom OSS/BSS Overview
Telecom OSS/BSS Overview
 
Designing Teams for Emerging Challenges
Designing Teams for Emerging ChallengesDesigning Teams for Emerging Challenges
Designing Teams for Emerging Challenges
 

Similar a R C S P Study Guide 199 01 V2.0.1

Deltek productsupportcompatibilitymatrix
Deltek productsupportcompatibilitymatrixDeltek productsupportcompatibilitymatrix
Deltek productsupportcompatibilitymatrix
Darnette A
 
Getting Started on PeopleSoft InstallationJuly 2014.docx
Getting Started on PeopleSoft InstallationJuly 2014.docxGetting Started on PeopleSoft InstallationJuly 2014.docx
Getting Started on PeopleSoft InstallationJuly 2014.docx
gilbertkpeters11344
 

Similar a R C S P Study Guide 199 01 V2.0.1 (20)

Oracle database 12c application express installation guide
Oracle database 12c application express installation guideOracle database 12c application express installation guide
Oracle database 12c application express installation guide
 
Captivate 5 user guide
Captivate 5 user guideCaptivate 5 user guide
Captivate 5 user guide
 
E49322 07
E49322 07E49322 07
E49322 07
 
Odi 12c-getting-started-guide-2032250
Odi 12c-getting-started-guide-2032250Odi 12c-getting-started-guide-2032250
Odi 12c-getting-started-guide-2032250
 
Rmx administrators guide_v8_2
Rmx administrators guide_v8_2Rmx administrators guide_v8_2
Rmx administrators guide_v8_2
 
Dwdm prerequisite
Dwdm prerequisiteDwdm prerequisite
Dwdm prerequisite
 
Deltek productsupportcompatibilitymatrix
Deltek productsupportcompatibilitymatrixDeltek productsupportcompatibilitymatrix
Deltek productsupportcompatibilitymatrix
 
122qpug
122qpug122qpug
122qpug
 
63 x0 manual windrock
63 x0 manual windrock63 x0 manual windrock
63 x0 manual windrock
 
Oracle database 12c 2 day + java developer's guide
Oracle database 12c 2 day + java developer's guideOracle database 12c 2 day + java developer's guide
Oracle database 12c 2 day + java developer's guide
 
Ugps user guide_v_e
Ugps user guide_v_eUgps user guide_v_e
Ugps user guide_v_e
 
Ugps user guide_v_e
Ugps user guide_v_eUgps user guide_v_e
Ugps user guide_v_e
 
Oracle coher
Oracle coherOracle coher
Oracle coher
 
WebLogic Scripting Tool
WebLogic Scripting ToolWebLogic Scripting Tool
WebLogic Scripting Tool
 
E13635
E13635E13635
E13635
 
Viewse um006 -en-e (1)
Viewse um006 -en-e (1)Viewse um006 -en-e (1)
Viewse um006 -en-e (1)
 
Phn 2524 001v000
Phn 2524 001v000Phn 2524 001v000
Phn 2524 001v000
 
Phn 2524 001v000
Phn 2524 001v000Phn 2524 001v000
Phn 2524 001v000
 
Guia implementacion seguridad oracle 12c
Guia implementacion seguridad oracle 12cGuia implementacion seguridad oracle 12c
Guia implementacion seguridad oracle 12c
 
Getting Started on PeopleSoft InstallationJuly 2014.docx
Getting Started on PeopleSoft InstallationJuly 2014.docxGetting Started on PeopleSoft InstallationJuly 2014.docx
Getting Started on PeopleSoft InstallationJuly 2014.docx
 

Más de Johnson Liu

Packet Tracer Simulation Lab Layer3 Routing
Packet Tracer Simulation Lab Layer3 RoutingPacket Tracer Simulation Lab Layer3 Routing
Packet Tracer Simulation Lab Layer3 Routing
Johnson Liu
 
Packet Tracer Simulation Lab Layer 2 Switching
Packet Tracer Simulation Lab Layer 2 SwitchingPacket Tracer Simulation Lab Layer 2 Switching
Packet Tracer Simulation Lab Layer 2 Switching
Johnson Liu
 
MC-LAG Configuration with BGP-base VPLS
MC-LAG Configuration with BGP-base VPLSMC-LAG Configuration with BGP-base VPLS
MC-LAG Configuration with BGP-base VPLS
Johnson Liu
 
2011 TWNIC SP IPv6 Transition
2011 TWNIC SP IPv6 Transition2011 TWNIC SP IPv6 Transition
2011 TWNIC SP IPv6 Transition
Johnson Liu
 

Más de Johnson Liu (16)

Packet Tracer Simulation Lab Layer3 Routing
Packet Tracer Simulation Lab Layer3 RoutingPacket Tracer Simulation Lab Layer3 Routing
Packet Tracer Simulation Lab Layer3 Routing
 
Packet Tracer Simulation Lab Layer 2 Switching
Packet Tracer Simulation Lab Layer 2 SwitchingPacket Tracer Simulation Lab Layer 2 Switching
Packet Tracer Simulation Lab Layer 2 Switching
 
Olive Introduction for TOI
Olive Introduction for TOIOlive Introduction for TOI
Olive Introduction for TOI
 
MC-LAG Configuration with BGP-base VPLS
MC-LAG Configuration with BGP-base VPLSMC-LAG Configuration with BGP-base VPLS
MC-LAG Configuration with BGP-base VPLS
 
Mobile 2G/3G Workshop
Mobile 2G/3G WorkshopMobile 2G/3G Workshop
Mobile 2G/3G Workshop
 
2011 TWNIC SP IPv6 Transition
2011 TWNIC SP IPv6 Transition2011 TWNIC SP IPv6 Transition
2011 TWNIC SP IPv6 Transition
 
CALM DURING THE STORM:Best Practices in Multicast Security
CALM DURING THE STORM:Best Practices in Multicast SecurityCALM DURING THE STORM:Best Practices in Multicast Security
CALM DURING THE STORM:Best Practices in Multicast Security
 
SEAMLESS MPLS
SEAMLESS MPLSSEAMLESS MPLS
SEAMLESS MPLS
 
ISSU A PLANNED UPGRADE TOOL
ISSU A PLANNED UPGRADE TOOLISSU A PLANNED UPGRADE TOOL
ISSU A PLANNED UPGRADE TOOL
 
CONTINUOUS SYSTEMS, NONSTOP OPERATIONS WITH JUNOS
CONTINUOUS SYSTEMS, NONSTOP OPERATIONS WITH JUNOSCONTINUOUS SYSTEMS, NONSTOP OPERATIONS WITH JUNOS
CONTINUOUS SYSTEMS, NONSTOP OPERATIONS WITH JUNOS
 
NG MVPN BGP ROUTE TYPES AND ENCODINGS
NG  MVPN BGP ROUTE TYPES AND ENCODINGSNG  MVPN BGP ROUTE TYPES AND ENCODINGS
NG MVPN BGP ROUTE TYPES AND ENCODINGS
 
Emerging Multicast VPN Applications
Emerging  Multicast  VPN  ApplicationsEmerging  Multicast  VPN  Applications
Emerging Multicast VPN Applications
 
Introduction to IGMP for IPTV Networks
Introduction to IGMP for IPTV NetworksIntroduction to IGMP for IPTV Networks
Introduction to IGMP for IPTV Networks
 
Virtual Private LAN Service (VPLS)
Virtual Private LAN Service (VPLS)Virtual Private LAN Service (VPLS)
Virtual Private LAN Service (VPLS)
 
術業有專攻,認證會說話
術業有專攻,認證會說話術業有專攻,認證會說話
術業有專攻,認證會說話
 
Cisco專業認證介紹
Cisco專業認證介紹Cisco專業認證介紹
Cisco專業認證介紹
 

R C S P Study Guide 199 01 V2.0.1

  • 1. Riverbed Certified Solutions Professional (RCSP) Study Guide Exam 199-01 for RiOS v5.0 June, 2009 Version 2.0
  • 2. RCSP Study Guide COPYRIGHT © 2007-2009 Riverbed Technology, Inc. ALL RIGHTS RESERVED All content in this manual, including text, graphics, logos, icons, and images, is the exclusive property of Riverbed Technology, Inc. (“Riverbed”) and is protected by U.S. and international copyright laws. The compilation (meaning the collection, arrangement, and assembly) of all content in this manual is the exclusive property of Riverbed and is also protected by U.S. and international copyright laws. The content in this manual may be used as a resource. Any other use, including the reproduction, modification, distribution, transmission, republication, display, or performance, of the content in this manual is strictly prohibited. TRADEMARKS RIVERBED TECHNOLOGY, RIVERBED, STEELHEAD, RiOS, INTERCEPTOR, and the Riverbed logo are trademarks or registered trademarks of Riverbed. All other trademarks mentioned in this manual are the property of their respective owners. The trademarks and logos displayed in this manual may not be used without the prior written consent of Riverbed or their respective owners. PATENTS Portions, features and/or functionality of Riverbed's products are protected under Riverbed patents, as well as patents pending. DISCLAIMER THIS MANUAL IS PROVIDED BY RIVERBED ON AN "AS IS" BASIS. RIVERBED MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, AS TO THE INFORMATION, CONTENT, MATERIALS, OR PRODUCTS INCLUDED OR REFERENCED IN THE MANUAL. TO THE FULL EXTENT PERMISSIBLE BY APPLICABLE LAW, RIVERBED DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. Although Riverbed has attempted to provide accurate information in this manual, Riverbed assumes no responsibility for the accuracy or completeness of the information. Riverbed may change the programs or products mentioned in this manual at any time without notice, but Riverbed makes no commitment to update the programs or products mentioned in this manual in any respect. Mention of non-Riverbed products or services is for information purposes only and constitutes neither an endorsement nor a recommendation. RIVERBED WILL NOT BE LIABLE UNDER ANY THEORY OF LAW, FOR ANY INDIRECT, INCIDENTAL, PUNITIVE OR CONSEQUENTIAL DAMAGES, INCLUDING, BUT NOT LIMITED TO, LOSS OF PROFITS, BUSINESS INTERRUPTION, LOSS OF INFORMATION OR DATA OR COSTS OF REPLACEMENT GOODS, ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL OR ANY RIVERBED PRODUCT OR RESULTING FROM USE OF OR RELIANCE ON THE INFORMATION PRESENT, EVEN IF RIVERBED MAY HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. CONFIDENTIAL INFORMATION The information in this manual is considered Confidential Information (as defined in the Reseller Agreement entered with Riverbed or in the Riverbed License Agreement currently available at www.riverbed.com/license, as applicable). © 2007-2009 Riverbed Technology, Inc. All rights reserved. 1
Table of Contents
Preface
  Certification Overview
  Benefits of Certification
  Exam Information
  Certification Checklist
  Recommended Resources for Study
RIVERBED CERTIFIED SOLUTIONS PROFESSIONAL STUDY GUIDE
  I. General Knowledge
    Optimizations Performed by RiOS
    TCP/IP
    Common Ports
    RiOS Auto-discovery Process
    Enhanced Auto-Discovery Process
    Connection Pooling
    In-path Rules
    Peering Rules
    Steelhead Appliance Models and Capabilities
  II. Deployment
    In-path
    Out-of-Band (OOB) Splice
    Virtual In-path
    Policy-Based Routing (PBR)
    WCCP Deployments
    Advanced WCCP Configuration
    Server-Side Out-of-Path Deployments
    Asymmetric Route Detection
    Connection Forwarding
    Simplified Routing (SR)
    Data Store Synchronization
    CIFS Prepopulation
    Authentication and Authorization
    SSL
    Central Management Console (CMC)
    Steelhead Mobile Solution (Steelhead Mobile Controller & Steelhead Mobile Client)
    Interceptor Appliance
  III. Features
    Feature Licensing
    HighSpeed TCP (HSTCP)
    MX-TCP
    Quality of Service
    PFS (Proxy File Service) Deployments
    NetFlow
    IPSec
    Operation on VLAN Tagged Links
  IV. Troubleshooting
    Common Deployment Issues
    Reporting and Monitoring
    Troubleshooting Best Practices
  V. Exam Questions
    Types of Questions
    Sample Questions
  VI. Appendix
    Acronyms and Abbreviations
  • 5. RCSP Study Guide Preface This Riverbed Certification Study Guide is intended for anyone who wants to become certified in the Riverbed Steelhead products and Riverbed Optimization System (RiOS). The Riverbed Certified Solutions Professional (RCSP) program is designed to validate the skills required of technical professionals who work in the implementation of Riverbed products. This study guide provides a combination of theory and practical experience needed for a general understanding of the subject matter. It also provides sample questions that will help in the evaluation of personal progress and provide familiarity with the types of questions that will be encountered in the exam. This publication does not replace practical experience, nor is it designed to be a stand-alone guide for any subject. Instead, it is an effective tool that, when combined with education activities and experience, can be a very useful preparation guide for the exam. Certification Overview The Riverbed Certified Solutions Professional certificate is granted to individuals who demonstrate advanced knowledge and experience with the RiOS product suite. The typical RCSP will have taken a Riverbed approved training class such as the Steelhead Appliance Deployment & Management course in addition to having hands-on experience in performing deployment, troubleshooting, and maintenance of RiOS products in small, medium, and large organizations. While there are no set requirements prior to taking the exam, candidates who have taken a Riverbed authorized training class and have at least six months of hands-on experience with RiOS products have a significantly higher chance of receiving the certification. We would like to emphasize that solely taking the class will not adequately prepare you for the exam. To obtain the RCSP certification, you are required to pass a computerized exam available at any Pearson VUE testing center worldwide. Benefits of Certification 1. Establishes your credibility as a knowledgeable and capable individual in regard to Riverbed's products and services. 2. Helps improve your career advancement potential. 3. Qualifies you for discounts and/or benefits for Riverbed sponsored events and training. 4. Entitles you to use the RCSP certification logo on your business card. Exam Information Exam Specifications • Exam Number: 199-01 • Exam Name: Riverbed Certified Solutions Professional • Version of RiOS: Up to RiOS version 5.0 for the Steelhead appliances and the Central Management Console, and Interceptor 2.0 and Steelhead Mobile 2.0 • Number of Questions: 65 • Total Time: 75 minutes for exam, 15 minutes for Survey and Tutorial (90 minutes total) • Exam Provider: Pearson VUE • Exam Language: English only. Riverbed allows a 30-minute time extension for English exams taken in non-English speaking countries for students that request it. English speaking countries are Australia, Bermuda, Canada, Great Britain, Ireland, New Zealand, Scotland, 4 © 2007-2009 Riverbed Technology, Inc. All rights reserved.
  • 6. RCSP Study Guide South Africa, and the United States. A form will need to be completed by the candidate and submitted to Pearson VUE. • Special Accommodations: Yes (must submit written request to Pearson VUE for ESL or ADA accommodations; includes time extensions and/or a reader) • Offered Locations: Worldwide (over 5000 test centers in 165 countries) • Pre-requisites: None (although taking a Riverbed training class is highly recommended) • Available to: Everyone (partners, customers, employees, etc) • Passing Score: 700 out of 1000 (70%) • Certification Expires: Every 2 years (must recertify every 2 years, no grace period) • Wait Between Failed Attempts: 72 hours. No retakes allowed on passed exams. • Cost: $150.00 (USD) • Number of Attempts Allowed: Unlimited (though statistics are kept) Certification Checklist As the RCSP exam is geared towards individuals who have both the theoretical knowledge and hands on experience with the RiOS product suite, ensuring proficiency in both areas is crucial towards passing the exam. For individuals starting out with the process, we recommend the following steps to guide you along the way: 1. Building Theoretical Knowledge The easiest way to become knowledgeable in deploying, maintaining, and troubleshooting the RiOS product suite is to take a Riverbed authorized training class. To ensure the greatest possibility of passing the exam, it is recommended that you review the RCSP Study Guide and ensure your familiarity with all topics listed, prior to any examination attempts. 2. Gaining Hands-on Experience While the theoretical knowledge will get you partway there, it is the hands-on knowledge that can get you over the top and enable you to pass the exam. Since all deployments are different, providing an exact amount of experience required is difficult. Generally, we recommend that resellers and partners perform at least five deployments in a variety of technologies prior to attempting the exam. For customers, and alternatively for resellers and partners, starting from the design and deployment phase and having at least six months of experience in a production environment would be beneficial. 3. Taking the Exam The final step in becoming an RCSP is to take the exam at a Pearson VUE authorized testing center. To register for any Riverbed Certification exam, please visit http://www.pearsonvue.com/riverbed. Recommended Resources for Study Riverbed Training Courses Information on Riverbed Training can be found at: http://www.riverbed.com/services/training/. • Steelhead Appliance Deployment & Management • Steelhead Appliance Operations & L1/L2 Troubleshooting • Steelhead Mobile Installation & Configuration • Central Management Console Configuration & Operations • Interceptor Appliance Installation & Configuration © 2007-2009 Riverbed Technology, Inc. All rights reserved. 5
  • 7. RCSP Study Guide • Steelhead Appliance Advanced Deployment & Troubleshooting Publications Recommended Reading (In No Particular Order) • This study guide • Riverbed documentation o Steelhead Management Console User's Guide o Steelhead Command-Line Interface Reference Guide o Steelhead Appliance Deployment Guide o Steelhead Appliance Installation Guide o Bypass Card Installation Guide o Steelhead Mobile Controller User’s Guide o Steelhead Mobile Controller Installation Guide o Central Management Console User's Guide o Central Management Console Installation Guide o Interceptor Appliance User's Guide o Interceptor Appliance Installation Guide Other Reading (URLs Subject to Change) • http://www.ietf.org/rfc.html o RFC 793 (Original TCP RFC) o RFC 1323 TCP extensions for high performance o RFC 3649 (HighSpeed TCP for Large Congestion Windows) o RFC 3742 (Limited Slow-Start for TCP with Large Congestion Windows) o RFC 2474 (Differentiated Services Code Point) • http://www.caida.org/tools/utilities/flowscan/arch.xml (NetFlow Protocol and Record Headers) • http://ubiqx.org/cifs/Intro.html (CIFS) • Microsoft Windows 2000 Server Administrator’s Companion by Charlie Russell and Sharon Crawford (Microsoft Press, 2000) • Common Internet File System (CIFS) Technical Reference by the Storage Networking Industry Association (Storage Networking Industry Association, 2002) • TCP/IP Illustrated, Volume I, The Protocols by W. R. Stevens (Addison-Wesley, 1994) • Internet Routing Architectures (2nd Edition) by Bassam Halabi (Cisco Press, 2000) 6 © 2007-2009 Riverbed Technology, Inc. All rights reserved.
  • 8. RCSP Study Guide RIVERBED CERTIFIED SOLUTIONS PROFESSIONAL STUDY GUIDE The Riverbed Certified Solutions Professional exam, and therefore this study guide, covers the Riverbed products and technologies through RiOS version 5.0 only (Interceptor 2.0 and Steelhead Mobile 2.0 as well). I. General Knowledge Optimizations Performed by RiOS Optimization is the process of increasing data throughput and network performance over the WAN using Steelhead appliances. An optimized connection exhibits bandwidth reduction as it traverses the WAN. The optimization techniques RiOS utilizes are: • Data Streamlining • Transport Streamlining • Application Streamlining • Management Streamlining You should be familiar with the differences in these streamlining techniques for the RCSP test. This information can be found in the Steelhead Appliance Deployment Guide. Transaction Acceleration (TA) TA is composed of the following optimization mechanisms: • A connection bandwidth-reducing mechanism called Scalable Data Referencing (SDR) • A Virtual TCP Window Expansion (VWE) mechanism that repacks TCP payloads with references that represent arbitrary amounts of data, thus increasing the client-data per WAN TCP window • A latency reduction and avoidance mechanism called Transaction Prediction SDR and TP can work independently or in conjunction with one another depending on the characteristics and workload of the data sent across the network. The results of the optimization vary, but often result in throughput improvements in the range of 10 to 100 times over unaccelerated links. Scalable Data Referencing (SDR) Bandwidth optimization is delivered through SDR. SDR uses a proprietary algorithm to break up TCP data streams into data chunks that are stored in the hard disk (data store) of the Steelhead appliances. Each data chunk is assigned a unique integer label (reference) before it is sent to the peer Steelhead appliance across the WAN. If the same byte sequence is seen again in the TCP data stream, then the reference is sent across the WAN instead of the raw data chunk. The peer Steelhead appliance uses this reference to reconstruct the original data in the TCP data stream. Data and references are maintained in persistent storage in the data store within each Steelhead appliance. Because SDR checks data chunks byte-by-byte there are no consistency issues even in the presence of replicated data. How Does SDR Work? When data is sent for the first time across a network (no commonality with any file ever sent before), all data and references are new and are sent to the Steelhead appliance on the other side of the network. This new data and the accompanying references are compressed using conventional algorithms so as to improve performance, even on the first transfer. © 2007-2009 Riverbed Technology, Inc. All rights reserved. 7
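The labeling-and-lookup idea described above can be illustrated with a small sketch. The following Python fragment is purely illustrative: the fixed chunk size, the SHA-1 fingerprint, and the integer labels are assumptions made for clarity, not Riverbed's actual (proprietary, content-defined) chunking or reference format.

import hashlib

CHUNK_SIZE = 4096  # illustrative only; real SDR chunk boundaries are not fixed-size

class SenderDataStore:
    """Toy model of a sender-side data store: maps chunk fingerprints to labels."""
    def __init__(self):
        self.labels = {}       # fingerprint -> integer label (reference)
        self.next_label = 0

    def encode(self, stream: bytes):
        """Return ('ref', label) or ('raw', label, data) tokens to send across the WAN."""
        tokens = []
        for i in range(0, len(stream), CHUNK_SIZE):
            chunk = stream[i:i + CHUNK_SIZE]
            fp = hashlib.sha1(chunk).digest()
            if fp in self.labels:                  # chunk seen before: send only the reference
                tokens.append(("ref", self.labels[fp]))
            else:                                  # cold data: assign a label and send the bytes
                label = self.next_label
                self.labels[fp] = label
                self.next_label += 1
                tokens.append(("raw", label, chunk))
        return tokens

store = SenderDataStore()
first = store.encode(b"A" * 4096 + b"B" * 4096)   # cold transfer: both chunks go raw
second = store.encode(b"B" * 4096 + b"A" * 4096)  # same chunks, different order: references only
print([t[0] for t in first], [t[0] for t in second])

In the second transfer only the short references cross the simulated WAN, which is the effect described above for warm data.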
  • 9. RCSP Study Guide Over time, more data crosses the network (revisions of a document for example). Thereafter, when these new requests are sent across the network, the data is compared with references that already exist in the local data store. Any data that the Steelhead appliance determines already exists on the far side of the network are not sent—only the references are sent across the network. As files are copied, edited, renamed, and otherwise changed or moved (as well as web pages being viewed or email sent), the Steelhead appliance continually builds the data store to include more and more data and references. References can be shared by different files and by files in different applications if the underlying bits are common to both. Since SDR can operate on all TCP-based protocols, data commonality across protocols can be leveraged so long as the binary representation of that data does not change between the protocols. For example, when a file transferred via FTP is then transferred using WFS (Windows File System), the binary representation of the file is basically the same and thus references can be sent for that file. Lempel-Ziv (LZ) Compression SDR and compression are two different features and can be controlled separately. However, LZ compression is the primary form of data reduction for cold transfers. The Lempel-Ziv compression methods are among the most popular algorithms for lossless storage. Compression is turned on by default. In-path rules can be used to define which optimization features will be used for which set of traffic flowing through the Steelhead appliance. TCP Optimizations & Virtual Window Expansion (VWE) As Steelhead appliances are designed to optimize data transfers across wide area networks, they make extensive use of standards-based enhancements to the TCP protocol that may not be present in the TCP stack of many desktop and server operating systems. This includes improved transport capability for networks with high bandwidth delay products via the use of HighSpeed TCP, MX-TCP, or TCP Vegas for lower bandwidth links, partial acknowledgements, and other more obscure but throughput enhancing and latency reducing features. VWE allows Steelhead appliances to repack TCP payloads with references that represent arbitrary amounts of data. This is possible because Steelhead appliances operate at the Application Layer and terminate TCP, which gives them more flexibility in the way they optimize WAN traffic. Essentially, the TCP payload is increased from its normal window size to an arbitrarily large amount dependent on the compression ratio for the connection. Because of this increased payload, a given application that relies on TCP performance (for example, HTTP or FTP) takes fewer trips across the WAN to accomplish the same task. For example, consider a client-to- server connection that may have a 64KB TCP window. In the event that there is 256KB of data to transfer, it would take several TCP windows to accomplish this in a network with high latency. With SDR however, that 256KB of data can be potentially reduced to fit inside a single TCP window, removing the need to wait for acknowledgements to be sent prior to sending the next window, and thus speed the transfer. Transaction Prediction Application-level latency optimization is delivered through the Transaction Prediction module. Transaction Prediction leverages an intimate understanding of protocol semantics to reduce the chattiness that would normally occur over the WAN. 
By acting on foreknowledge of specific protocol request-response mechanisms, Steelhead appliances streamline the delivery of data that
  • 10. RCSP Study Guide would normally be delivered in small increments through large numbers of interactions between the client and server over the WAN. As transactions are executed between the client and server, the Steelhead appliance intercepts each transaction, compares it to the database of past transactions, and makes decisions about the probability of future events. Based on this model, if a Steelhead appliance determines there is a high likelihood of a future transaction occurring, it performs that transaction, rather than waiting for the response from the server to propagate back to the client and then back to the server. Dramatic performance improvements result from the time saved by not waiting for each serial transaction to arrive prior to making the next request. Instead, the transactions are pipelined one right after the other. Of course, transactions are executed by Steelhead appliances ahead of the client only when it is safe to do so. To ensure data integrity, Steelhead appliances are designed with knowledge of the underlying protocols to know when it is safe to do so. Fortunately, a wide range of common applications have very predictable behaviors and, consequently, Transaction Prediction can enhance WAN performance significantly. When combined with SDR, Transaction Prediction can improve WAN performance up to 100 times. Common Internet File System (CIFS) Optimization CIFS is a proposed standard protocol that lets programs make requests for files and services on remote computers over the Internet. CIFS uses the client/server programming model. A client program makes a request of a server program (usually in another computer) for access to a file or to pass a message to a program that runs in the server computer. The server takes the requested action and returns a response. CIFS is a public or open variation of the Server Message Block (SMB) protocol developed and used by Microsoft. In the Steelhead appliance, CIFS optimization is enabled by default. Typically, you would only disable CIFS optimization to troubleshoot the system. Overlapping Opens Due to the way certain applications handle the opening of files, file locks are not properly granted to the application in such a way that would allow a Steelhead appliance to optimize access to that file using Transaction Prediction. To prevent any compromise to data integrity, the Steelhead appliance only optimizes data to which exclusive access is available (in other words, when locks are granted). When an opportunistic lock (oplock) is not available, the Steelhead appliance does not perform application-level latency optimizations but still performs SDR and compression on the data as well as TCP optimizations. The CIFS overlapping opens feature remedies this problem by having the server-side Steelhead handle file locking operations on behalf of the requesting application. If you disable this feature, the Steelhead appliance will still increase WAN performance, but not as effectively. Enabling this feature on applications that perform multiple opens of the same file to complete an operation will result in a performance improvement (for example, CAD applications). NOTE: For the Steelhead appliance to handle the locking properly, all transactions on the file must be optimized by that Steelhead appliance. 
Therefore, if a remote user opens a file that is optimized using the overlapping opens feature and a second user opens the same file, the second user might receive an error if that access does not pass through the Steelhead appliance (for example, with certain applications that are accessed over the LAN). If this occurs, you should disable overlapping opens optimization for those applications.
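The value of pipelining predicted transactions is easiest to see with a back-of-the-envelope calculation. The request count and round-trip time in the Python fragment below are invented purely for illustration and do not describe any particular CIFS workload:

rtt = 0.1            # assumed 100 ms WAN round-trip time
requests = 1000      # assumed number of serial request/response exchanges for one file operation

serial_time = requests * rtt    # each request waits a full round trip before the next is sent
predicted_rtts = 2              # assumption: prediction collapses the exchange to about 2 round trips
predicted_time = predicted_rtts * rtt

print(f"serial: {serial_time:.0f} s, pipelined: {predicted_time:.1f} s")
# serial: 100 s, pipelined: 0.2 s

The absolute numbers are made up; the point is that serial time grows linearly with the number of round trips, while the pipelined exchange stays roughly constant.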
  • 11. RCSP Study Guide Messaging Application Programming Interface (MAPI) Optimization MAPI optimization is enabled by default. Only uncheck this box if you want to disable MAPI optimization. Typically, you disable MAPI optimization to troubleshoot problems with the system. For example, if you are experiencing problems with Microsoft Outlook clients connecting to Exchange, you can disable MAPI latency acceleration (while continuing to optimize with SDR for MAPI). • Read ahead on attachments • Read ahead on large emails • Write behind on attachments • Write behind on large emails • Fails if user authentication set too high (downgrades to SDR/TCP acceleration only, no Transaction Prediction) MAPI Prepopulation Without MAPI prepopulation, if a user closes Microsoft Outlook or switches off the workstation the TCP sessions are broken. With MAPI prepopulation, the Steelhead appliance can start acting as if it is the mail client. If the client closes the connection, the client-side Steelhead appliance will keep an open connection to the server-side Steelhead appliance and the server-side Steelhead appliance will keep the connection open to the server. This allows for data to be pushed through the data store before the user logs on to the server again. The default timer is set to 96 hours, after that, the connection will be reset. • Optimized MAPI connections held open after client exit (acts like the client left the PC on); think of it as virtual client • Keep reading mail until timeout • No one is ever reconnected to the prepopulation session (including the original user) • No need for more Client Access Licenses (CALs); no agents to deploy • Can configure frequency check and timeout or to disable it • Enables transmission during off times even in consolidated environments • The feature can be disabled independently from other MAPI optimizations HTTP Optimization A typical web page is not a single file that is downloaded all at once. Instead, web pages are composed of dozens of separate objects—including .jpg and .gif images, JavaScript code, cascading style sheets, and more—each of which must be requested and retrieved separately, one after the other. Given the presence of latency, this behavior is highly detrimental to the performance of web-based applications over the WAN. The higher the latency, the longer it takes to fetch each individual object and, ultimately, to display the entire page. RiOS v5.0 and later optimizes web applications using: • Parsing and Prefetching of Dynamic Content • URL Learning 10 © 2007-2009 Riverbed Technology, Inc. All rights reserved.
  • 12. RCSP Study Guide • Removal of Unfetchable Objects • HTTP Metadata Responses • Persistent Connections More information can be found in the Steelhead Appliance Management Console User’s Guide. NFS Optimization You can configure Steelhead appliances to use Transaction Prediction to perform application- level latency optimization on NFS. Application-level latency optimization improves NFS performance over high latency WANs. NFS latency optimization optimizes TCP connections and is only supported for NFS v3. You can configure NFS settings globally for all servers and volumes, or you can configure NFS settings that are specific to particular servers or volumes. When you configure NFS settings for a server, the settings are applied to all volumes on that server unless you override settings for specific volumes. • Read-ahead and read caching (checks freshness with modify date) • Write-behind • Metadata prefetching and caching • Convert multiple requests into one larger request • Special symbolic link handling Microsoft SQL Optimization Steelhead appliance MS SQL protocol support includes the ability to perform prefetching and synthetic pre-acknowledgement of queries on database applications. By default, rules that increase optimization for Microsoft Project Enterprise Edition ship with the unit. This optimization is not enabled by default, and enabling MS SQL optimization without adding specific rules will rarely have an effect on any other applications. MS SQL packets must be carried in TDS (Tabular Data Stream) format for a Steelhead appliance to be able to perform optimization. You can also use MS SQL protocol optimization to optimize other database applications, but you must define SQL rules to obtain maximum optimization. If you are interested in enabling the MS SQL feature for other database applications, contact Riverbed Professional Services. Oracle Forms Optimization The Oracle Java Initiator (Jinitiator) or Oracle Forms is a browser plug-in program that accesses Oracle E-Business application content and Oracle forms applications directly within a web browser. The Steelhead appliance decrypts, optimizes, and then re-encrypts Oracle Forms native and HTTP mode traffic. Use Oracle Forms optimization to improve Oracle Forms traffic performance. Oracle Forms does not need a separate license and is enabled by default. However, you must also set an in-path rule to enable this feature. © 2007-2009 Riverbed Technology, Inc. All rights reserved. 11
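The effect of the parse-and-prefetch techniques listed under HTTP Optimization earlier can be sketched as follows. This Python fragment is illustrative only: the sample page, the tag selection, and the round-trip figure are assumptions, and it does not reflect how RiOS actually parses pages or chooses prefetch candidates.

from html.parser import HTMLParser

class ObjectCollector(HTMLParser):
    """Collect the embedded objects a browser would otherwise fetch one at a time."""
    def __init__(self):
        super().__init__()
        self.objects = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and "src" in attrs:
            self.objects.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and "href" in attrs:
            self.objects.append(attrs["href"])

page = """<html><head><link rel="stylesheet" href="site.css"></head>
<body><img src="logo.gif"><img src="photo.jpg"><script src="app.js"></script></body></html>"""

collector = ObjectCollector()
collector.feed(page)

rtt = 0.1   # assumed 100 ms WAN round-trip time
# Fetched serially over the WAN, each object costs at least one round trip; if the
# client-side appliance has already prefetched them, the browser's requests are served locally.
print(collector.objects)
print(f"serial fetch lower bound: {len(collector.objects) * rtt:.1f} s")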
TCP/IP

General Operation
Steelhead appliances are typically placed at the two ends of the WAN, as close to the client and server as possible (no additional WAN links between the end node and the Steelhead appliance). By placing Steelhead appliances in the network, the TCP session between client and server can be intercepted, and a level of control over the TCP session can therefore be obtained. TCP sessions have to be intercepted in order to be optimized, so the Steelhead appliances must see all traffic from source to destination and back. For any given optimized session there are three distinct sessions: a TCP connection between the client and the client-side Steelhead appliance, a connection between the server and the server-side Steelhead appliance, and finally a connection between the two Steelhead appliances.

Common Ports

Ports Used by RiOS:
• 7744: Data store sync port
• 7800: In-path port
• 7801: NAT port
• 7810: Out-of-path port
• 7820: Failover port for redundant appliances
• 7830: Exchange traffic port
• 7840: Exchange Directory NSPI traffic port
• 7850: Connection Forwarding (neighbor) port
• 7860: Interceptor Appliance
• 7870: Steelhead Mobile

Interactive Ports Commonly Passed Through by Default on Steelhead Appliances (Partial List):
• 7: TCP ECHO
• 23: Telnet
• 37: UDP/Time
• 107: Remote Telnet Service
• 179: Border Gateway Protocol
• 513: Remote Login
• 514: Shell
• 1494, 2598: Citrix
• 3389: MS WBT, TS/Remote Desktop
• 5631: PC Anywhere
• 5900 - 5903: VNC
• 6000: X11

Secure Ports Commonly Passed Through by Default on Steelhead Appliances (Partial List):
• 22/TCP: ssh
• 49/TCP: tacacs
• 443/TCP: https
• 465/TCP: smtps
• 563/TCP: nntps
• 585/TCP: imap4-ssl
• 614/TCP: sshell
• 636/TCP: ldaps
• 989/TCP: ftps-data
• 990/TCP: ftps
• 992/TCP: telnets
• 993/TCP: imaps
• 995/TCP: pop3s
• 1701/TCP: l2tp
• 1723/TCP: pptp
• 3713/TCP: tftp over tls

RiOS Auto-discovery Process
Auto-discovery is the process by which the Steelhead appliance automatically intercepts and optimizes traffic on all IP addresses and ports. By default, auto-discovery is applied to all IP addresses and to the ports that are not secure, interactive, or Riverbed well-known ports.

Packet Flow
The following sequence shows the first-connection packet flow for traffic that is classified to be optimized with the original auto-discovery protocol. The TCP SYN sent by the client is intercepted by the Steelhead appliance, which attaches a TCP option to the TCP header; this allows the remote Steelhead appliance to recognize that there is a Steelhead appliance on the other side of the network. When the server-side Steelhead appliance sees the option (also known as a TCP probe), it responds to it by sending a TCP SYN/ACK back. After auto-discovery has taken place, the Steelhead appliances continue to set up the TCP inner session and the TCP outer sessions.
Between the Client, SH1 (client-side Steelhead), SH2 (server-side Steelhead), and the Server, the exchange proceeds as follows:
1. The Client sends IP(C)→IP(S):SYN. SH1 intercepts it and forwards it across the WAN with the probe attached (SYN+Probe).
2. SH2 intercepts the probed SYN and answers with IP(S)→IP(C):SYN/ACK plus a probe response that announces its service port (TCP 7800 by default). The probe result is cached for 10 seconds.
3. SH1 and SH2 set up the inner connection: IP(SH1)→IP(SH2):SYN, IP(SH2)→IP(SH1):SYN/ACK, IP(SH1)→IP(SH2):ACK, followed by the exchange of setup information. A pool of 20 inner connections is built.
4. SH2 opens the outer connection to the Server (SYN, SYN/ACK, ACK) and reports the connect result back; the connect result is cached until failure.
5. SH1 completes the outer connection to the Client by returning a SYN/ACK, which the Client acknowledges with an ACK.

TCP Option
The TCP option used for auto-discovery is 0x4C, which is 76 in decimal. The client-side Steelhead appliance attaches a 10-byte option to the TCP header; the server-side Steelhead appliance attaches a 14-byte option in return. Note that this is only done in the initial discovery process and not during connection setup between the Steelhead appliances and the outer TCP sessions.

Enhanced Auto-Discovery Process
In RiOS v4.0.x or later, enhanced auto-discovery (EAD) is available. Enhanced auto-discovery automatically discovers the last Steelhead appliance in the network path of the TCP connection. In contrast, the original auto-discovery protocol automatically discovers the first Steelhead appliance in the path. The difference is only seen in environments where there are three or more Steelhead appliances in the network path for connections to be optimized. Enhanced auto-discovery works with Steelhead appliances running the original auto-discovery protocol. Enhanced auto-discovery ensures that a Steelhead appliance only optimizes TCP connections that are being initiated or terminated at its local site, and that a Steelhead appliance does not optimize traffic that is transiting through its site.
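Both the original and enhanced discovery exchanges ride on the 0x4C probe option described above. The sketch below shows, in Python, what looking for that option in a SYN's option list amounts to; the option layout is simplified and the sample bytes are invented, so this is not a parser for the real probe contents.

RVBD_PROBE_KIND = 0x4C   # 76 decimal: the auto-discovery (probe) option kind

def find_probe(tcp_options: bytes):
    """Walk a TCP options byte string and return the value of the 0x4C option, if present."""
    i = 0
    while i < len(tcp_options):
        kind = tcp_options[i]
        if kind == 0:                     # End of Option List
            return None
        if kind == 1:                     # No-Operation is a single byte
            i += 1
            continue
        length = tcp_options[i + 1]       # every other option carries a length byte
        if kind == RVBD_PROBE_KIND:
            return tcp_options[i + 2:i + length]
        i += length
    return None

# Made-up options blob: an MSS option (kind 2, length 4) followed by a 10-byte probe option.
options = bytes([2, 4, 0x05, 0xB4]) + bytes([RVBD_PROBE_KIND, 10]) + b"\x01" * 8
value = find_probe(options)
print("no probe" if value is None else f"probe found, {2 + len(value)} option bytes")

The 10-byte total in this example matches the client-side option size mentioned above; a server-side probe response would carry the larger 14-byte option.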
The enhanced auto-discovery first-connection exchange between Client, SH1, SH2, and Server still uses option 0x4C, but two probes are now carried back-to-back. The client's SYN (SEQ1) is probed toward the server, and SH1 is notified that the responding Steelhead appliance may not be the last one in the path; SH2 forwards a probed SYN (SEQ2) toward the server and, when the server answers with a plain SYN/ACK, returns the probe response to SH1 as the last (server-side) Steelhead appliance. The probe result is cached for 10 seconds and the connect result until failure. SH1 and SH2 then set up the inner connection on service port 7800, with a pool of 20 connections, while the outer client-side and server-side handshakes complete.

Connection Pooling

General Operation
By default, all auto-discovered Steelhead appliance peers will have a default connection pool of 20. The connection pool is a user-configurable value which can be set for each Steelhead appliance peer. The purpose of connection pooling is to avoid the TCP handshake for the inner session between the Steelhead appliances across the high-latency WAN. By pre-creating these sessions between peer Steelhead appliances, when a new connection request is made by a client, the client-side Steelhead appliance can simply use a connection from the pool. Once a connection is pulled from the pool, a new connection is created to take its place so as to maintain the specified number of connections.

In-path Rules

General Operation
In-path rules allow a client-side Steelhead appliance to determine what action to perform when intercepting a new client connection (the first TCP SYN packet for a connection). The action taken depends on the type of in-path rule selected and is outlined in detail below. It is important to note that the rules are matched based on source/destination IP information, destination port, and/or VLAN, and are processed from the first rule in the list to the last (top down). Rule processing stops when the first rule matching the specified parameters is reached, at which point the action selected by that rule is taken. Steelhead appliances have three passthrough rules by default, and a fourth implicit rule to auto-discover remote Steelhead appliances; traffic that does not match the first three rules is therefore optimized via auto-discovery. The three default passthrough rules include port groupings matching interactive traffic (i.e., Telnet, VNC, RDP), encrypted traffic (the secure ports listed earlier), and Riverbed-related ports (i.e., 7800, 7810).

Different Types and Their Function
• Pass Through. Pass-through rules identify traffic that is passed through the network unoptimized. For example, you may define pass-through rules to exclude subnets from
  • 17. RCSP Study Guide optimization. Traffic is also passed through when the Steelhead appliance is in bypass mode. (Passthrough might occur because of in-path rules, because the connection was established before the Steelhead appliance was put in place, or before the Steelhead service was enabled.) • Fixed-Target. Fixed-target rules specify out-of-path Steelhead appliances near the target server that you want to optimize. Determine which servers you want the Steelhead appliance to optimize (and, optionally which ports), and add rules to specify the network of servers, ports, port labels, and out-of-path Steelhead appliances to use. Fixed-target rules can also be used for in-path deployments for Steelhead appliances not using EAD. • Auto Discover. Auto-discovery is the process by which the Steelhead appliance automatically intercepts and optimizes traffic on all IP addresses and ports. By default, auto- discovery is applied to all IP addresses and the ports which are not secure, interactive, or default Riverbed ports. Defining in-path rules modifies this default setting. • Discard. Packets for the connection that match the rule are dropped silently. The Steelhead appliance filters out traffic that matches the discard rules. This process is similar to how routers and firewalls drop disallowed packets; the connection-initiating device has no knowledge of the fact that its packets were dropped until the connection times out. • Deny. When packets for connections match the deny rule, the Steelhead appliance actively tries to reset the connection. With deny rules, the Steelhead appliance actively tries to reset the TCP connection being attempted. Using an active reset process rather than a silent discard allows the connection initiator to know that its connection is disallowed. Peering Rules Applicability and Conditions of Use Peering Rules Configuring peering rules defines what to do when a Steelhead appliance receives an auto- discovery probe from another Steelhead appliance. As such, the scope of a peering rule is limited to a server-side Steelhead appliance (the one receiving the probe). Note that peering rules on an intermediary Steelhead appliance (or server-side) will have no effect in preventing optimization with a client-side Steelhead appliance if it is using a fixed-target rule designating the intermediary Steelhead appliance as its destination (since there is no auto-discovery probe in a fixed-target rule). The following example shows where you might wish to use peering rules: Site A Site B Site C Client Steelhead1 Steelhead2 Steelhead3 Server 2 WAN 1 WAN 2 Server 1 Server1 is on the same LAN as Steelhead2 so connections from the client to Server1 should be optimized between Steelhead1 and Steelhead2. Concurrently, Server2 is on the same LAN as Steelhead3 and connections from the client to Server2 should be optimized between Steelhead1 and Steelhead3. 16 © 2007-2009 Riverbed Technology, Inc. All rights reserved.
  • 18. RCSP Study Guide • You do not need to define any rules on Steelhead1 or Steelhead3 • Add peering rules on Steelhead2 to process connections normally going to Server1 and to pass through all other connections so that connections to Server2 are not optimized by Steelhead2 • A rule to pass through inner connections between Steelhead1 and Steelhead3 is already in place by default (by default connection to destination port 7800 is included by port label “RBT-Proto”) This configuration causes connections going to Server1 to be intercepted by Steelhead2, and connections going to anywhere else to be intercepted by another Steelhead appliance (for example, Steelhead3 for Server2). Overcoming Peering Issues Using Fixed-Target Rules If you do not enable automatic peering or define peering rules as described in the previous sections, you must define: • A fixed-target rule on Steelhead1 to go to Steelhead3 for connections to Server2 • A fixed-target rule on Steelhead3 to go to Steelhead1 for connections to servers in the same site as Steelhead1 • If you have multiple branches that go through Steelhead2, you must add a fixed-target rule for each of them on Steelhead1 and Steelhead3 Steelhead Appliance Models and Capabilities Model Specifications (subject to change) Steelhead Appliance Ports A Steelhead appliance has Console, AUX, Primary, and WAN and LAN ports. • The Primary and AUX ports cannot share the same network subnet • The Primary and In-path interfaces can share the same network subnet • You must use the Primary port on the server-side for out-of-path deployment © 2007-2009 Riverbed Technology, Inc. All rights reserved. 17
• You cannot use the Auxiliary port except for management
• If the Steelhead appliance is deployed between two switches, both the LAN and WAN ports must be connected with straight-through cables

Interface Naming Conventions
The interface names for the bypass cards are a combination of the slot number and the port pairs (<slot>_<pair>, <slot>_<pair>). For example, if a four-port bypass card is located in slot 0 of your appliance, the interface names are lan0_0, wan0_0, lan0_1, and wan0_1 respectively. Alternatively, if the bypass card is located in slot 1 of your appliance, the interface names are lan1_0, wan1_0, lan1_1, and wan1_1 respectively.
The maximum number of copper LAN-WAN pairs (total paths) is ten: two built-in with a four-port card, six with two six-port cards, and then two for a four-port card, for a maximum of ten pairs.
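Because the naming is purely mechanical, it can be expressed as a small helper. The Python below simply applies the <slot>_<pair> pattern described above; it is an illustration, not a Riverbed tool.

def bypass_interface_names(slot: int, pairs: int):
    """Return the LAN/WAN/in-path interface names for a bypass card in a given slot."""
    names = []
    for pair in range(pairs):
        names.append({
            "lan": f"lan{slot}_{pair}",
            "wan": f"wan{slot}_{pair}",
            "inpath": f"inpath{slot}_{pair}",   # logical L3 interface for the LAN/WAN pair
        })
    return names

# A four-port card (two LAN/WAN pairs) in slot 1:
for entry in bypass_interface_names(slot=1, pairs=2):
    print(entry)
# {'lan': 'lan1_0', 'wan': 'wan1_0', 'inpath': 'inpath1_0'}
# {'lan': 'lan1_1', 'wan': 'wan1_1', 'inpath': 'inpath1_1'}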
  • 20. RCSP Study Guide II. Deployment Deployment Methods Physical In-path In a physical in-path deployment, the Steelhead appliance is physically in the direct path network traffic will take between clients and servers. The clients and servers continue to see client and server IP addresses and the Steelhead appliance bridges unoptimized traffic from its LAN facing side to its WAN facing side (and vice versa). Physical in-path configurations are suitable for any location where the total bandwidth is within the limits of the installed Steelhead appliance or serial cluster of Steelhead appliances. It is generally one of the simplest deployment options and among the easiest to maintain. Logical In-path In a logical in-path deployment, the Steelhead appliance is logically in the path between clients and servers. In a logical in-path deployment, clients and servers continue to see client and server IP addresses. This deployment differs from a physical in-path deployment in that a packet redirection mechanism is used to direct packets to Steelhead appliances that are not in the physical path of the client or server. Commonly used technologies for redirection are: Layer-4 switches, Web Cache Communication Protocol (WCCP), and Policy-based Routing (PBR). Server-Side Out-of-Path A server-side out-of-path deployment is a network configuration in which the Steelhead appliance is not in the direct or logical path between the client and the server. Instead, the server- side Steelhead appliance is connected through the Primary interface and listens on port 7810 to connections coming from client-side Steelhead appliances. In an out-of-path deployment, the Steelhead appliance acts as a proxy and does not perform NAT of the client’s IP address as with in-path deployments (to allow the server to see the original client IP address), but will instead source NAT to the Primary interface address on the Steelhead appliance that is in server-side out-of-path. A server-side out-of-path configuration is suitable for data center locations when physical in-path or logical in-path configurations are not possible. With server-side out-of-path, client IP visibility is no longer available to the server (due to the NAT) and optimization initiated from the server side is not possible (since there is no redirection of the outbound connection’s packets to the Steelhead appliance). Physical Device Cabling Steelhead appliances have multiple physical and virtual interfaces. The Primary interface is typically used for management purposes, data store synchronization (if applicable), and for server-side out-of-path configurations. The Primary interface can be assigned an IP address and connected to a switch. You would use a straight-through cable for this configuration. The LAN and WAN interfaces are purely L1/L2. No IP addresses can be assigned. Instead, a logical L3 interface is created. This is the “In-path” interface and it is designated a name on a per slot and port basis (in LAN/WAN pairs). A bypass card (or in-path card) in slot0 with just one LAN and one WAN interface will have a logical interface called inpath0_0. In-path interfaces for a 4-port card in slot1 will get inpath1_0 and inpath1_1, representing the pair or LAN/WAN ports respectively. Inpath1_0 will represent LAN1_0 and WAN1_0. Inpath1_1 will represent LAN1_1 and WAN1_1. © 2007-2009 Riverbed Technology, Inc. All rights reserved. 19
  • 21. RCSP Study Guide For a physical in-path deployment, when connecting the LAN and WAN interface to the network, both of them are to be treated as a router. When connecting to a router, host, or firewall, a crossover cable needs to be used. When connecting to a switch, a straight-through cable has to be used. The Steelhead appliance supports auto-MDIX (medium dependent interface crossover), however when using the wrong cables you run the risk of breaking the connection between the components the Steelhead appliances placed in-between, especially in bypass. These components may not support auto-MDIX. For a virtual in-path deployment the WAN interface needs to be connected. The LAN interface does not need to be connected and will be shut down automatically as soon as the virtual in-path option is enabled in the Steelhead appliances configuration. For server-side out-of-path deployments only the Primary interface needs to be connected. In-path In-path Networks Physical in-path configurations are suitable for locations where the total bandwidth is within the limits of the installed Steelhead appliance or serial cluster of Steelhead appliances. The Steelhead appliance can be physically connected to access both ports and trunks. When the Steelhead appliance is placed on a trunk, the In-path interface has to be able to tag its traffic with the correct VLAN number. The supported trunking protocol is 802.1q (“Dot1Q”). A tag can be assigned via the GUI or the CLI. The CLI command for this is: HOSTNAME (config) # in-path interface inpathx_x vlan <id> Inter-Steelhead appliance traffic will use this VLAN (except in Full Transparent connections as explained below). There are several variations of the in-path deployment. Steelhead appliances could be placed in series to be redundant. Peering rules based on a peer IP address will have to be applied to both Steelhead appliances to avoid peering between each other. When using 4-port cards, and thus multiple in-path IP addresses, all addresses will have to be defined to avoid peering. A serial cluster is a failover design that can be used to mitigate the risk of possible network instabilities and outages caused by a single Steelhead appliance failure (typically caused by excessive bandwidth as there is no longer data reduction occurring). When the maximum number of TCP connections for a Steelhead appliance is reached, that appliance stops intercepting new connections. This allows the next Steelhead appliance in the cluster the opportunity to intercept the new connections, if it has not reached its maximum number of connections. The in-path peering rules and in-path rules are used so that the Steelhead appliances in the cluster know not to intercept connections between themselves. Appliances in a failover deployment process the peering rules you specify in a spill-over fashion. A keepalive method is used between two Steelhead appliances to monitor each others status and set a master and backup state for both Steelhead appliances. It is recommended to assign the LAN-side Steelhead appliance to be the master due to the amount of passthrough traffic from Steelhead to client or server. Optionally, data stores can be synchronized to ensure warm performance in case of a failure. In case the Steelhead appliances are deployed in parallel of each other, measures need to be taken to avoid asymmetrical traffic from being passed through without optimization. 
This usually occurs when two or more routing points exist in the network and traffic is spread over the links simultaneously. Connection Forwarding can be used to exchange flow information between
  • 22. RCSP Study Guide the Steelhead appliances in the parallel deployment. Multiple Steelhead appliances can be bundled together. WAN Visibility Modes WAN visibility pertains to how packets traversing the WAN are addressed. RiOS v5.0 offers three types of WAN visibility modes: correct addressing, port transparency, and full address transparency. You configure WAN visibility on the client-side Steelhead appliance (where the connection is initiated). The server-side Steelhead appliance must also support multiple WAN visibility (RiOS v5.0 or later). Correct Addressing Correct addressing uses Steelhead appliance IP addresses and port numbers in the TCP/IP packet header fields for optimized traffic in both directions across the WAN. This is the default setting. This is “correct” as the devices which are communicating (the TCP endpoints) are the Steelhead appliances, so their IP addresses/ports are reflected in the connection. Port Transparency Port address transparency preserves your server port numbers in the TCP/IP header fields for optimized traffic in both directions across the WAN. Traffic is optimized while the server port number in the TCP/IP header field appears to be unchanged. Routers and network monitoring devices deployed in the WAN segment between the communicating Steelhead appliances can view these preserved fields. Use port transparency if you want to manage and enforce QoS policies that are based on destination ports. If your WAN router is following traffic classification rules written in terms of client and network addresses, port transparency enables your routers to use existing rules to classify the traffic without any changes. Port transparency enables network analyzers deployed within the WAN (between the Steelhead appliances) to monitor network activity and to capture statistics for reporting by inspecting traffic according to its original TCP port number. Port transparency does not require dedicated port configurations on your Steelhead appliances. NOTE: Port transparency only provides server port visibility. It does not provide client and server IP address visibility, nor does it provide client port visibility. Full Transparency Full address transparency preserves your client and server IP addresses and port numbers in the TCP/IP header fields for optimized traffic in both directions across the WAN. It also preserves VLAN tags. Traffic is optimized while these TCP/IP header fields appear to be unchanged. Routers and network monitoring devices deployed in the WAN segment between the communicating Steelhead appliances can view these preserved fields. If both port transparency and full address transparency are acceptable solutions, port transparency is preferable. Port transparency avoids potential networking risks that are inherent to enabling full address transparency. For details, see the Steelhead Appliance Deployment Guide. However, if you must see your client or server IP addresses across the WAN, full transparency is your only configuration option. Out-of-Band (OOB) Splice What is the OOB Splice? An OOB splice is an independent, separate TCP connection made on the first connection between two peer Steelhead appliances used to transfer version, licensing and other OOB data between peer Steelhead appliances. An OOB connection must exist between two peers for © 2007-2009 Riverbed Technology, Inc. All rights reserved. 21
connections between these peers to be optimized. If the OOB splice dies, all optimized connections on the peer Steelhead appliances will be terminated. The OOB connection is a single connection existing between two Steelhead appliances regardless of the direction of flow. So if you open one or more connections in one direction and then initiate a connection from the other direction, there will still be only one connection for the OOB splice. This connection is made on the first connection between two peer Steelhead appliances using their in-path IP addresses and port 7800 by default. The OOB splice is rarely of any concern except in full transparency deployments.

Case Study
In the example below, the Client is trying to establish a connection to Server-1. The topology is as follows: the Client (10.1.0.10) and the client-side Steelhead CFE-1 (10.1.0.2) sit behind firewall FW-1 (inside 10.1.0.1, WAN side 1.1.1.1); across the WAN, firewall FW-2 (WAN side 2.2.2.2, inside 10.2.0.1) fronts the server-side Steelhead SFE-1 (10.2.0.2) and Server-1 (10.2.0.10); a further site holds SFE-2 (10.3.0.2) and Server-2 (10.3.0.10) behind 10.3.0.1.

Issue 1: After establishing the inner connection, the client-side Steelhead (CFE-1) will try to establish an OOB connection to the server-side Steelhead (SFE-1). It will address it by the IP address reported by SFE-1 in the probe response (10.2.0.2). Clearly, the connection to this address will fail since 10.2.x.x addresses are invalid outside of the firewall (FW-2).

Resolution 1: In the above example, there is one combination of address and port (IP:port) we know about: the connection the client is destined for, which is Server-1. The client should be able to connect to Server-1. Therefore, the OOB splice creation code in sport (the optimization service) can be changed to create a transparent OOB connection, addressed from the Client to Server-1, if the corresponding inner connection is transparent.

How to Configure
There are three options to address the OOB splice problem described in Issue 1 above. In a default configuration the out-of-band connection uses the IP addresses of the client-side Steelhead and server-side Steelhead. This is known as correct addressing and is the default behavior. This configuration fails in the network topology described above but works for the majority of networks. The command below is the default setting in a Steelhead appliance's configuration.
in-path peering oobtransparency mode none
In the network topology discussed in Issue 1, the default configuration does not work. There are two oobtransparency modes that may work in establishing the peer connections: destination and full. When destination mode is used, the OOB connection is addressed to the first server IP and port pair that went through the Steelhead appliance, while the source is a client-side Steelhead IP and port number chosen by the client-side Steelhead appliance. To change to this configuration use the following CLI command:
in-path peering oobtransparency mode destination
In oobtransparency full mode, the source is the IP address of the first client together with a port pre-configured on the client-side Steelhead appliance (port 708 by default). The destination IP address and port are the same as in destination mode, that is, those of the server. Full mode is the recommended configuration when VLAN transparency is required. To change to this configuration, use the following CLI command:
in-path peering oobtransparency mode full
To change the port used by the client-side Steelhead appliance when oobtransparency mode full is configured, use the following CLI command:
in-path peering oobtransparency port <port>
It is important to note that these oobtransparency options are only used with full transparency. If the first inner connection to a Steelhead appliance was not transparent, the OOB splice always uses correct addressing.
Virtual In-Path
Introduction to Virtual In-Path Deployments
In a virtual in-path deployment, the Steelhead appliance is virtually in the path between clients and servers. Traffic moves in and out of the same WAN interface. This deployment differs from a physical in-path deployment in that a packet redirection mechanism is used to direct packets to Steelhead appliances that are not in the physical path of the client or server.
Redirection mechanisms:
• Layer-4 Switch. You enable Layer-4 switch (or server load-balancer) support when you have multiple Steelhead appliances in your network to manage large bandwidth requirements.
• PBR (Policy-Based Routing). PBR enables you to redirect traffic to a Steelhead appliance that is configured as a virtual in-path device. PBR allows you to define policies to redirect packets instead of relying on routing protocols. You define policies to redirect traffic to the Steelhead appliance and policies to avoid loop-back.
• WCCP (Web Cache Communication Protocol). WCCP was originally implemented on Cisco routers, multi-layer switches, and web caches to redirect HTTP requests to local web caches (version 1). Version 2, which is supported on Steelhead appliances, can redirect any type of connection from multiple routers or web caches and on different ports.
Policy-Based Routing (PBR)
Introduction to PBR
PBR is a router configuration that allows you to define policies to route packets instead of relying on routing protocols. It is enabled on a per-interface basis: packets coming into a PBR-enabled interface are checked to see if they match the defined policies. If they match, the packets are routed according to the rule defined for the policy. If they do not match, the packets are routed based on the usual routing table. The rules can redirect the packets to a specific IP address. To avoid an infinite loop, PBR must be enabled on the interfaces where the client traffic arrives and disabled on the interfaces facing the Steelhead appliance. The common best practice is to place the Steelhead appliance on a separate subnet.
One of the major issues with PBR is that it can black hole traffic (drop all TCP connections to a destination) if the device it is redirecting to fails. To avoid black holing traffic, PBR must have a way of tracking whether the PBR next hop is available. You can enable this tracking feature in a route map with the following Cisco router command:
set ip next-hop verify-availability
With this command, PBR attempts to verify the availability of the next hop using information from CDP. If the next hop is unavailable, PBR skips the actions specified in the route map. PBR checks availability in the following manner:
1. When PBR first attempts to send to a PBR next hop, it checks the CDP neighbor table to see if the IP address of the next hop appears to be available. If so, it sends an Address Resolution Protocol (ARP) request for the address, resolves it, and begins redirecting traffic to the next hop (the Steelhead appliance).
2. After PBR has verified the next hop, it continues to send to the next hop as long as it obtains answers to the ARP requests for the next hop IP address. If an ARP request fails to obtain an answer, PBR rechecks the CDP table. If there is no entry in the CDP table, it no longer uses the route map to send traffic.
This verification provides a failover mechanism. More recent versions of Cisco IOS software include a feature called PBR with Multiple Tracking Options; in addition to the older CDP-based method, it allows methods such as HTTP and ping to be used to determine whether the PBR next hop is available. Using the CDP-based method allows you to run with older IOS 12.x versions.
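The following is a minimal, hedged sketch of the PBR redirection described above on a Cisco router. The access-list number, route-map name, interface, and the Steelhead in-path address (10.1.5.2) are placeholders rather than values from this guide.
Router(config)# access-list 101 permit tcp 10.1.0.0 0.0.0.255 any
Router(config)# route-map STEELHEAD permit 10
Router(config-route-map)# match ip address 101
Router(config-route-map)# set ip next-hop 10.1.5.2
Router(config-route-map)# set ip next-hop verify-availability
Router(config-route-map)# exit
Router(config)# interface fastEthernet 0/0
Router(config-if)# ip policy route-map STEELHEAD
The route map is applied only on the interface where client traffic arrives, not on the interface facing the Steelhead appliance, to avoid the redirection loop noted above.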
WCCP Deployments
Introduction to WCCP
WCCP is a stateful protocol that the router and the Steelhead appliance use to redirect traffic to the Steelhead appliance so that it can be optimized. Several functions have to be covered to make the protocol stateful and scalable: failover, load distribution, and negotiation of connection parameters all have to be communicated throughout the cluster that the Steelhead appliances and routers form upon successful negotiation. The protocol has four messages to encompass all of these functions:
• HERE_I_AM. Sent by Steelhead appliances to announce themselves.
• I_SEE_YOU. Sent by WCCP-enabled routers to respond to announcements.
• REDIRECT_ASSIGN. Sent by the designated Steelhead appliance to determine flow distribution.
• REMOVAL_QUERY. Sent by the router to check on a Steelhead appliance after missed HERE_I_AM messages.
When you configure WCCP on a Steelhead appliance:
• Routers and Steelhead appliances are added to the same service group.
• Steelhead appliances announce themselves to the routers.
• Routers respond with their view of the service group.
• One Steelhead appliance becomes the designated CE (caching engine) and tells the routers how to redirect traffic among the Steelhead appliances in the service group.
How Steelhead Appliances Communicate with Routers
Steelhead appliances can use one of the following methods to communicate with routers (a minimal unicast sketch follows this list; multicast configuration is covered later in this section):
• Unicast UDP. The Steelhead appliance is configured with the IP address of each router. If additional routers are added to the service group, they must be added on each Steelhead appliance.
• Multicast UDP. The Steelhead appliance is configured with a multicast group. If additional routers are added, you do not need to add or change configuration settings on the Steelhead appliances.
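This unicast sketch is illustrative only; the service group number (90) matches the examples later in this guide, while the interface and router address (10.1.0.1) are placeholders. On the router:
Router> enable
Router# configure terminal
Router(config)# ip wccp 90
Router(config)# interface fastEthernet 0/0
Router(config-if)# ip wccp 90 redirect in
Router(config-if)# end
Router# write memory
On the Steelhead appliance, the router is referenced by its unicast IP address rather than a multicast group:
WCCP Steelhead > enable
WCCP Steelhead # configure terminal
WCCP Steelhead (config) # wccp enable
WCCP Steelhead (config) # wccp service-group 90 routers 10.1.0.1
WCCP Steelhead (config) # write memory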
Redirection
By default, all TCP traffic is redirected. Optionally, a redirect-list can be defined so that only traffic matching the redirect-list is redirected. A redirect-list in a WCCP configuration refers to an ACL configured on the router to select the traffic that is redirected. Traffic is redirected using one of the following schemes:
• GRE (Generic Routing Encapsulation). Each data packet is encapsulated in a GRE packet with the Steelhead appliance IP address configured as the destination. This scheme is applicable to any network.
• L2 (Layer 2). Each packet's MAC address is rewritten with a Steelhead appliance MAC address. This scheme is possible only if the Steelhead appliance is connected to the router at Layer 2.
• Either. The either value uses L2 first; if Layer 2 is not supported, GRE is used. This is the default setting.
You can configure your Steelhead appliance not to encapsulate return packets. This allows your WCCP Steelhead appliance to negotiate with the router or switch as if it were going to send gre-return packets, but to actually send l2-return packets. This configuration is optional but recommended when connected directly at Layer 2. The command to override WCCP packet return negotiation is wccp l2-return enable. Be sure the network design permits this.
Load Balancing and Failover
WCCP supports unequal load balancing. Traffic is redirected based on a hashing scheme and the weight of the Steelhead appliances. Each router uses a 256-bucket Redirection Hash Table to distribute traffic for a Service Group across the member Steelhead appliances. It is the responsibility of the Service Group's designated Steelhead appliance to assign each router's Redirection Hash Table. The designated Steelhead appliance uses a WCCP2_REDIRECT_ASSIGNMENT message to assign the routers' Redirection Hash Tables. This message is generated following a change in Service Group membership and is sent to the same set of addresses to which the Steelhead appliance sends WCCP2_HERE_I_AM messages. A router flushes its Redirection Hash Table if a WCCP2_REDIRECT_ASSIGNMENT is not received within five HERE_I_AM_T seconds of a Service Group membership change.
The hash algorithm can use several different input fields to produce an 8-bit output (the bucket value). The default input fields are the source and destination IP addresses of the packet being redirected; the source and destination TCP ports, or any combination of these fields, can also be used.
The weight determines the percentage of traffic a Steelhead appliance in a cluster receives; the hashing algorithm determines which flow is redirected to which Steelhead appliance. The default weight is based on the Steelhead appliance model number: the weight is heavier for models that support more connections. You can modify the default weight if desired. Using weights you can also create an active/passive cluster by assigning a weight of 0 to the passive Steelhead appliance (see the sketch after the next paragraph). That Steelhead appliance only receives traffic when the active Steelhead appliance fails.
Assignment and Redirection Methods
The assignment method refers to how a router chooses which Steelhead appliance in a WCCP service group to redirect packets to. There are two assignment methods: the Hash assignment method and the Mask assignment method. Steelhead appliances support both.
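As a hedged sketch of the active/passive arrangement described above: the weight parameter syntax is an assumption from memory of the RiOS CLI and should be verified against the CLI reference for your release, and the service group and router address are placeholders. On the active Steelhead appliance:
wccp service-group 90 routers 10.1.0.1 weight 100
On the passive Steelhead appliance:
wccp service-group 90 routers 10.1.0.1 weight 0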
HASH
Redirection using Hash assignment is a two-stage process. In the first stage, a primary key is formed from the packet (as defined by the Service Group) and is hashed to yield an index into the Redirection Hash Table. The bucket at that index is either an unflagged web-cache entry, an unassigned bucket, or a bucket flagged for secondary hashing. If the bucket holds an unflagged web-cache entry, the packet is redirected to that web-cache. If the bucket is unassigned, the packet is forwarded normally. If the bucket is flagged, indicating a secondary hash, a secondary key is formed (as defined by the Service Group description). This key is hashed to yield a second index into the Redirection Hash Table. If this secondary entry contains a web-cache index, the packet is redirected to that web-cache; if the entry is unassigned, the packet is forwarded normally.
MASK
The first phase of Mask assignment is defining the mask itself. The mask can be up to seven bits and can be applied to the source TCP port, destination TCP port, source IP address, or destination IP address, or any combination of the four attributes, but may not exceed seven bits in total. Depending on the number of bits selected, a different number of buckets is created (n mask bits yield 2^n buckets) and assigned to the Steelhead appliances in the service group. As traffic traverses the router, a bitwise AND operation is performed between the mask and the IP address or TCP port fields that the mask applies to, and the traffic is assigned to buckets based on the result of the AND operation. Mask and value pairs are processed in the order they are received and are in turn compared against the mask bits.
From the Internet-Draft for WCCP version 2 (http://www.wrec.org/Drafts/draft-wilson-wrec-wccp-v2-00.txt); note that in all of the mask fields of this element a zero means "don't care":
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                      Source Address Mask                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Destination Address Mask                   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|        Source Port Mask       |     Destination Port Mask     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
• Source Address Mask. The 32-bit mask to be applied to the source IP address of the packet.
• Destination Address Mask. The 32-bit mask to be applied to the destination IP address of the packet.
• Source Port Mask. The 16-bit mask to be applied to the TCP/UDP source port field of the packet.
• Destination Port Mask. The 16-bit mask to be applied to the TCP/UDP destination port field of the packet.
It may not be obvious from the diagram, but there is a priority bit order when using Mask assignment. The diagram reads from most significant to least significant, bottom left to top; in other words, the priority bits are source port, destination port, destination address, and source address. This is helpful when troubleshooting, to determine which bucket a specific resource is allocated to.
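As an illustration only (the assign-scheme and src-ip-mask parameter names are assumptions based on memory of the RiOS CLI and should be checked against the CLI reference for your release), a three-bit source-address mask such as 0x7 would yield eight buckets to distribute across the service group:
wccp service-group 90 routers 10.1.0.1 assign-scheme mask src-ip-mask 0x7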
For more information regarding Hash or Mask assignment, refer to the Steelhead Appliance Deployment Guide and the whitepaper "WCCP Mask Assignment" provided on the Riverbed Partner Portal and the Riverbed Technical Support site.
Advanced WCCP Configuration
Using Multicast Groups
If you add multiple routers and Steelhead appliances to a service group, you can configure them to exchange WCCP protocol messages through a multicast group. Configuring a multicast group is advantageous because if a new router is added, it does not need to be explicitly added on each Steelhead appliance. Multicast addresses must be between 224.0.0.0 and 239.255.255.255.
Configuring Multicast Groups on the Router
On the router, at the system prompt, enter the following set of commands:
Router> enable
Router# configure terminal
Router(config)# ip wccp 90 group-address 224.0.0.3
Router(config)# interface fastEthernet 0/0
Router(config-if)# ip wccp 90 redirect in
Router(config-if)# ip wccp 90 group-listen
Router(config-if)# end
Router# write memory
NOTE: Multicast addresses must be between 224.0.0.0 and 239.255.255.255.
Configuring Multicast Groups on the Steelhead Appliance
On the WCCP Steelhead appliance, at the system prompt, enter the following set of commands:
WCCP Steelhead > enable
WCCP Steelhead # configure terminal
WCCP Steelhead (config) # wccp enable
WCCP Steelhead (config) # wccp mcast-ttl 10
WCCP Steelhead (config) # wccp service-group 90 routers 224.0.0.3
WCCP Steelhead (config) # write memory
WCCP Steelhead (config) # exit
Limiting Redirection by TCP Port
By default all TCP ports are redirected, but you can configure the WCCP Steelhead appliance to tell the router to redirect only certain TCP source or destination ports. You can specify up to a maximum of seven ports per service group.
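As a hedged example only (the ports parameter name and syntax are assumptions from memory of the RiOS CLI and should be verified for your release), a service group might be limited to a handful of destination ports like this:
wccp service-group 90 routers 10.1.0.1 ports 80,443,139,445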
Using Access Lists for Specific Traffic Redirection
If redirection must be based on traffic characteristics other than ports, you can use ACLs on the router to define what traffic is redirected.
ACL considerations:
• ACLs are processed in order, from top to bottom. As soon as a particular packet matches a statement, it is processed according to that statement and is not evaluated against subsequent statements. Therefore, the order of your access-list statements is very important.
• If no port information is explicitly defined, all ports are assumed.
• By default, all lists include an implied deny all entry at the end, which ensures that traffic that is not explicitly included is denied. You cannot change or delete this implied entry.
Access Lists: Best Practice
To avoid requiring the router to do extra work, Riverbed recommends that you create an ACL that redirects only TCP traffic to the Steelhead appliance. When a WCCP-configured Steelhead appliance receives UDP, GRE, ICMP, or other non-TCP traffic, it returns the traffic to the router. (A redirect-list sketch combining these recommendations appears after the troubleshooting commands below.)
Verifying and Troubleshooting WCCP Configuration
Checking the Router Configuration
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#show ip wccp
Router#show ip wccp 90 detail
Router#show ip wccp 90 view
Verifying WCCP Configuration on an Interface
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#show ip interface
Look for WCCP status messages near the end of the output. You can also trace WCCP packets and events on the router.
Checking the Access List Configuration
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#show access-lists <access_list_number>
Tracing WCCP Packets and Events on the Router
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#debug ip wccp events
WCCP events debugging is on
Router#debug ip wccp packets
WCCP packet info debugging is on
Router#term mon
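Putting the ACL guidance above into practice, the following hedged sketch defines a redirect-list that limits redirection to TCP traffic for a branch subnet; the access-list number, subnet, and service group number are placeholders:
Router(config)# access-list 120 permit tcp 10.1.0.0 0.0.0.255 any
Router(config)# access-list 120 permit tcp any 10.1.0.0 0.0.0.255
Router(config)# ip wccp 90 redirect-list 120
The implicit deny all at the end of the list keeps non-TCP and unmatched traffic from being redirected, so the Steelhead appliance is not burdened with returning it to the router.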
Server-Side Out-of-Path Deployments
Out-of-Path Networks
An out-of-path deployment is a network configuration in which the Steelhead appliance is not in the direct physical or logical path between the client and the server. In an out-of-path deployment, the Steelhead appliance acts as a proxy. An out-of-path configuration is suitable for data center locations where physical in-path or virtual in-path configurations are not possible.
In an out-of-path deployment, the client-side Steelhead appliance is configured as an in-path device, and the server-side Steelhead appliance is configured as an out-of-path device. The command to enable server-side out-of-path is:
HOSTNAME (config) # out-of-path enable
[Figure: a client-side in-path Steelhead appliance (LAN and WAN interfaces) reaches the Primary (PRI) interface of the server-side Steelhead appliance across the WAN through a fixed-target rule; the IP source seen by the server is the server-side Steelhead appliance (IP SRC=S-SH).]
A fixed-target rule is applied on the client-side Steelhead appliance to make sure the TCP session is intercepted and statically sent to the out-of-path Steelhead appliance on the server side. When out-of-path is enabled on the server-side Steelhead appliance, it starts listening on port 7810 for incoming connections from a client-side Steelhead appliance.
The Steelhead appliance can perform NAT: the server sees the IP address of the Steelhead appliance as the source of the connection, so the packets are returned to the Steelhead appliance instead of the client. This is necessary to make sure that the bidirectional traffic is seen by the Steelhead appliance. Also keep in mind that optimization only occurs when the TCP connection is initiated by the client.
Out-of-Path, Failover Deployment
An out-of-path, failover deployment serves networks where an in-path deployment is not an option. This deployment is cost effective, simple to manage, and provides redundancy. When both Steelhead appliances are functioning properly, connections traverse the master appliance. If the master Steelhead appliance fails, subsequent connections traverse the backup Steelhead appliance. When the master Steelhead appliance is restored, the next connection traverses the master Steelhead appliance again. If both Steelhead appliances fail, the connection is passed through to the server unoptimized. To achieve this, specify multiple target appliances in the fixed-target in-path rule on the client-side Steelhead appliance.
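A hedged sketch of this configuration follows. The parameter names (target-addr, target-port, backup-addr, backup-port) are assumptions from memory of the RiOS fixed-target rule syntax and should be verified against the CLI reference for your release; the hostnames are placeholders, and 10.2.0.5 and 10.2.0.6 stand in for the Primary interface addresses of the master and backup server-side appliances. On the client-side Steelhead appliance:
CLIENT-SH (config) # in-path rule fixed-target target-addr 10.2.0.5 target-port 7810 backup-addr 10.2.0.6 backup-port 7810 dstaddr 10.2.0.0/24 rulenum 1
On each server-side Steelhead appliance:
SERVER-SH (config) # out-of-path enable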
[Figure: data center LAN with a switch and server, a WAN router, and two out-of-path Steelhead appliances, Steelhead A and Steelhead B.]
Hybrid Mode: In-Path and Server-Side Out-of-Path Deployment
A hybrid mode deployment serves offices that have a single WAN routing point and local users, and where the Steelhead appliance must also be referenced from remote sites as an out-of-path device (for example, to avoid mistaken auto-discovery or to bypass intermediary Steelhead appliances). The following figure illustrates the client side of the network, where the Steelhead appliance is configured as both an in-path and a server-side out-of-path device.
[Figure: branch network with a client on a switch, an in-path Steelhead appliance with its Primary (PRI) interface reachable, a firewall/VPN device connecting to the WAN, and a DMZ off the firewall containing an FTP server and a web server.]
In this hybrid design, a client-side Steelhead appliance (not shown) would use the typical auto-discovery process to optimize any data going to or coming from the clients shown. If, however, a remote user wants optimization to the DMZ shown above, the standard auto-discovery process would not function properly, because the packet flow would prevent the auto-discovery probe from ever reaching the Steelhead appliance. To remedy this, a fixed-target rule matching the destination address of the DMZ and targeted at the Primary (PRI) interface of the Steelhead appliance ensures that the traffic reaches the Steelhead appliance and, because of the server-side out-of-path NAT process, that it returns to the Steelhead appliance for optimization on the return path.
Asymmetric Route Detection
Asymmetric route auto-detection enables Steelhead appliances to detect the presence of asymmetry within the network. Asymmetry is detected by the client-side Steelhead appliance. Once detected, the Steelhead appliance passes asymmetric traffic through unoptimized, allowing the TCP connections to continue to work. The first TCP connection for a pair of addresses might be