3. Strategic Technologies for 2010
Gartner's 10 most important strategic technologies for 2010, defined as those "with the potential for significant impact on the enterprise over the next three years":
Virtualization.
Cloud computing.
Servers (beyond blades).
Web-oriented architectures.
Enterprise web mashups.
Specialized systems.
Social software / social networks.
Unified communications.
Business intelligence.
Green IT.
Business and Technology Working as One
4. Cloud Computing (Definitions)
Wikipedia:
"Cloud computing is a paradigm that allows computing services to be offered over the Internet."
Russ Daniels of HP:
"Horizontal scaling, fine-grained resource control, self-service, variable cost based on usage."
ServePath:
"The use of a 3rd party service to perform computing needs on a publicly accessible IP basis. Cloud computing services are usually performed in consolidated Data Centers to keep costs low while improving overall utilization."
Elements common to all the definitions:
Access over the Internet (the "cloud")
Virtualization
Scalability
Pay-per-use
5. Cloud Computing: Concepts
We define "cloud computing" as a style of computing in which IT resources are:
Delivered to customers as a service using Internet technologies.
Massively scalable.
Global in reach.
Dynamically distributable, "on demand," in measurable quantity and quality.
Assigned just in time.
Served to multiple customers who share the same resources.
Paid for only as the service is used.
Virtualization is the foundation for moving toward cloud computing services.
6. SaaS, PaaS, IaaS?! The "aaS" family
SaaS (Software as a Service): a single instance of the software runs on the provider's infrastructure and serves multiple customer organizations. Example: Salesforce.com
PaaS (Platform as a Service): the encapsulation of an abstraction of a development environment. Example: rackspacecloud.com
IaaS (Infrastructure as a Service): a means of delivering storage and compute capacity as standardized services over the network. Example: Amazon EC2
8. Enterprises Moving Toward Cloud Computing
"Enterprises will maintain dedicated infrastructure for some purposes and will consume on-demand services obtained from the cloud for others."
9. Some Benefits of Cloud Computing
Cloud computing infrastructures provide greater adaptability, disaster recovery, and minimized downtime.
CAPEX costs and capacity problems are shifted to the cloud provider.
Resources can be acquired on demand.
Fixed costs become variable costs.
Greater flexibility and scalability for growth.
10. Virtualization
Virtualization is the abstraction of the physical resources of a computer so that virtual machines can run on top of it.
Each of these virtual machines sees a complete server and interacts with it through the virtualization technology.
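To make the idea concrete, here is a toy sketch (not Cisco or VMware code; all names are hypothetical) of how a hypervisor abstracts a host's physical resources and hands each virtual machine a quota-limited slice that looks like a complete server:

```python
# Toy model of resource abstraction: the "hypervisor" tracks the host's
# physical CPU and memory and carves out isolated slices for each VM.
class Hypervisor:
    def __init__(self, cpus, mem_gb):
        self.free_cpus = cpus
        self.free_mem_gb = mem_gb
        self.vms = []

    def create_vm(self, name, cpus, mem_gb):
        # Each VM only "sees" the slice of physical resources assigned to it.
        if cpus > self.free_cpus or mem_gb > self.free_mem_gb:
            raise RuntimeError(f"host cannot satisfy allocation for {name}")
        self.free_cpus -= cpus
        self.free_mem_gb -= mem_gb
        vm = {"name": name, "cpus": cpus, "mem_gb": mem_gb}
        self.vms.append(vm)
        return vm

host = Hypervisor(cpus=16, mem_gb=64)
web = host.create_vm("web01", cpus=4, mem_gb=8)
db = host.create_vm("db01", cpus=8, mem_gb=32)
print(host.free_cpus, host.free_mem_gb)  # 4 CPUs and 24 GB left on the host
```

Real hypervisors also time-share and overcommit resources; this sketch only illustrates the partitioned, isolated view each virtual machine gets.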
12. Benefits of Virtualization
Reduced administration effort:
Lower operational costs.
Fewer servers to manage.
Rapid deployment:
Today: 1–6 weeks (purchase, setup, software, test).
With virtualization this can be reduced to hours.
Reduced infrastructure and server costs.
Improved resource utilization.
Increased and improved availability.
Tools for improving security.
13. Next Generation Data Center
As IT infrastructure becomes more complex, IT requirements shift from managing technical operations to managing service operations. This drives the need for a data center transformation.
Four evolutionary forces are shaping the NGDC. The new generation of data centers will be:
• An infrastructure provisioned dynamically through automated capabilities that support the company's business processes.
• Technology services built on virtual infrastructure.
• Standardized processes.
• Technology architectures that allow IT resources to be consolidated.
14. Data Center and App Delivery Evolution
Data Center 1.0 — Mainframe (centralized); application delivery via the Front End Processor.
Data Center 2.0 — Client-server and distributed computing (decentralized); application delivery via the Server Load Balancer.
Data Center 3.0 — Service-oriented and Web 2.0 based (virtualized): consolidate, virtualize, automate; application delivery via the Cisco Application Control Engine (ACE).
Application architecture evolution: Mainframe → Minicomputer/PC → Client Server → ASP/SaaS → Cloud.
15. Your Application Delivery Reality Today
- under increasing pressures
Business requirements: collaboration, empowered users, SLA metrics, global availability, regulatory compliance.
Users and sites served: customer, partner, data center, branch, teleworker.
Delivery challenges: TCO and service delivery, app availability and performance, shift to SOA / Web 2.0, app security threats.
16. Introducing ACE in the Virtual DC (AVDC)
A Solution That Addresses DC 3.0 App Delivery Challenges
"AVDC improves the integration between ACE, Nexus 7000, UCS and VMware products" (ACE Module and Appliance, Unified Computing, Nexus 7000).
Enhancements focus on the following:
VM Intelligence – the ability to monitor and react to VM adds, moves and deletes
Automation – automatic service deployment and removal
Performance and Scale – provisioning app delivery infrastructures to meet increased demands and "right size" resources
Operational Simplification – streamlined provisioning and monitoring
The initial phase focuses on provisioning simplification, advanced reporting, and ACE/VMware vCenter integration.
17. CSS and CSM to ACE
CSS/CSM family capabilities → ACE family capabilities:
Content Services Switch (CSS) appliances (CSS 11501, 11503, 11506): basic load balancer with SSL offload, 500 Mbps to 6 Gbps → Application Control Engine appliance (ACE 4710): virtualized application switch, 1 Gbps to 2 Gbps.
Content Switching Module (CSM) for Cat6K: basic load balancer with SSL offload, 4 Gbps max → Application Control Engine module for Cat6K: virtualized application switch, 4 Gbps to 16 Gbps.
18. ACE Portfolio Summary
Comprehensive Application Delivery Solution — global products and tools:
ACE Module: application switching, 4–16 Gbps (up to 64 Gbps multi-module)
ACE 4710 Appliance: 0.5–4 Gbps
GSS: 20K DNS requests per second
ACE XML Gateway
ANM 3.0 Manager
"One-Click" migration tools
19. Cisco ACE Solutions
Virtualization / Isolation: lower TCO (OPEX/CAPEX)
Application Security: protects applications and server farms from attacks
Business Continuity / IT Agility: improved application provisioning and scalability
Application Performance: faster response time, better productivity
(Diagram: Internet in front of a virtualized data center with Virtual Partitions 1–3 hosting web, packaged, and custom applications.)
20. Cisco ACE Solution: Virtualization
- lower TCO (OPEX/CAPEX) with multi-tier consolidation
Before: many devices — separate firewalls and load balancers fronting the web server, application server, and database server tiers.
After: a single Cisco ACE, with Virtual Partitions 1–3 serving the web, application, and database server tiers.
• Infrastructure simplification
• Less device sprawl
• Virtual ACE for different tiers
• Additional scalability
• Faster provisioning
• Simplified management
• Improved security
• Cost effective: no additional hardware
21. Cisco ACE Solution: App Isolation
- complete isolation of applications or departments
• Virtual device for each app environment
• Complete isolation of applications
• Committed resource allocation
• Infrastructure simplification
• Improved application security
Isolate with virtual partitions (Virtual Devices 1–3) instead of physical devices.
22. Cisco ACE Solution: IT Agility
- enhanced IT agility, improved workflow
With a traditional load balancer, the application team's change requests must flow through the network administrators (for config changes) and through the server maintenance team (for server changes).
With a virtualized data center (Virtual Partitions 1–3 for web, packaged, and custom applications) and role-based access control (RBAC) with network, application, and server roles:
• Decreased operational overhead
• Customizable role-based administration
• App rollouts
• Configuration changes
• Patch updates
• HW maintenance
Improved workflow, faster application provisioning, and better scalability.
23. Cisco ACE Solution: App Performance
- accelerating application performance
For external web browsers reaching the virtualized data center (Virtual Partitions 1–3 for web, packaged, and custom applications) over the Internet:
• Advanced application acceleration
• Data encoding and compression
• Smart image optimization
• Dynamic browser caching
• Server offloads
Faster application response and improved productivity.
24. Cisco ACE Solution: App Security
- increased application security
Between external web browsers on the Internet and the virtualized data center (Virtual Partitions 1–3), ACE:
• Enforces secure use of applications
• Performs checks on all data
• Monitors all user sessions
• Blocks any HTTP attacks
Protects applications and server farms from external attacks.
26. ACE Module Software Key Features
Available:
Load balancing: SIP support, extended session ID stickiness, RADIUS client authentication, RDP, generic protocol parsing, enhanced predictors (adaptive algorithms, least loaded, least bandwidth), general SLB (KAL-AP, HTTP header rewrite, partial server farm failover, HA sync improvements, application-based probes, SNMP-based probes, UDP fast age).
Fast:
SSL enhancements, fast DNS LB, UDP "booster", source NAT changes.
Secure:
Protocol inspection (SIP, RTSP, ILS/LDAP, SCCP/Skinny), SSL queue delay, ACL improvements (object grouping), DoS protection (SYN cookie per interface), rate limiting (connection rate, bandwidth rate), management traffic protection, HTTP firewall features (inspect HTTP POST body, inspect HTTP "secondary cookies").
Management:
ANM 1.2, XML tagged config, real-time "TCP dump".
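The rate-limiting features listed above (connection rate, bandwidth rate) are commonly built on a token-bucket scheme. The following is a generic sketch of that idea, not the ACE implementation; the rate and burst parameters are illustrative.

```python
# Token bucket: tokens refill at a fixed rate up to a burst capacity;
# each new connection consumes one token or is rejected.
import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate          # tokens added per second
        self.capacity = burst     # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False              # connection would be dropped or queued

bucket = TokenBucket(rate=100, burst=10)  # ~100 new connections/s, bursts of 10
accepted = sum(bucket.allow() for _ in range(50))
print(accepted)  # roughly the burst size: the rest arrive faster than refill
```

A bandwidth-rate limiter works the same way with bytes instead of connections as the token unit.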
27. The Benefits of the ACE Architecture
Virtualization and Role-Based Access Control
Virtual devices guarantee application resources & performance
Virtual instead of physical devices to minimize device sprawl
Faster app rollouts, lower power and cooling requirements, less rack space
Forklift-Free Licensing
Software-controlled upgrades for key scale and performance categories
Investment protection and pay-as-you-grow
Price and Performance
ACE Module – Industry’s highest performing app switching platform: 4–64 Gbps
ACE Appliance – More capacity, advanced features, lower price
ACE: Next-Gen Architecture
Delivers Next-Gen Benefits
28. Forklift-Free Upgrades:
ACE Pay-as-You-Grow Licensing
Each step along the upgrade path is a software license upgrade, with no hardware change:
Throughput: 1 Gbps → 2 Gbps
SSL: 1K TPS → 5K TPS → 7.5K TPS
Compression: 100 Mbps → 500 Mbps → 1 Gbps
Virtual devices: 5 → 20
Superior investment protection.
29. Simplified Migration:
Resources for New App Rollouts
Powerful testing, design guides, ISV validation
http://www.cisco.com/go/optimizemyapp
30. ANM 3.2 Guided Setups
Simplifying Deployment of Devices and Services
New capabilities:
Illustrations show the concept being provisioned
Guide text provides useful provisioning information
"Learn More" link for deeper understanding of functionality
Forms-based entries speed the user through tasks
Benefits:
Helps ensure successful initial deployment
Embedded information helps avoid configuration errors
To complete a deployment, just follow the steps
ANM reduces complexity and deployment time while improving revenue recognition.
31. ANM 3.2 Summary Monitoring Dashboards
Quick Access to Core ACE Information
• At-a-glance knowledge of application health
• High-level situational awareness
• Understanding of resource usage
• Early warning of future resource needs
Dashboard panels: device info, license status, HA status, configuration and service summary, denied virtual resources, graphical virtual resource use.
Summary dashboards: single views for ACE status.
32. AVDC Component - VMware vCenter 4.0
Centralized Management for Virtual Machines
Standalone software product that simplifies and automates the management of virtual data centers.
Provides centralized control and visibility over a virtual infrastructure.
Extensible management platform with a broad partner ecosystem.
Benefits include increased IT productivity and reduced operational costs.
vCenter integrates with UCS, Nexus and ACE, delivering a comprehensive VM solution.
33. AVDC Phase 1: ANM vCenter Plug-In
Unified Management Tool for VMs and ACE
Overview: the ACE vCenter plug-in is a software component that allows an ACE environment to be configured and managed by vCenter.
Key components: vCenter, ANM, and the ACE Module and Appliance.
Description:
• Enables the association of existing vCenter VMs with existing ACE server farms
• Dashboard showing ACE and ANM server health information inside vCenter
• ANM acts as a proxy between the ACE Module/Appliance and vCenter
• Leverages ANM reporting capabilities and ACE MIBs for monitoring information
34. Enabling a New Server - Traditional Method
Multiple Systems, No Integration
Traditional enablement requires two management systems and coordination between two administrators:
The systems administrator uses vCenter to enable the new VM.
The SLB administrator uses the ADC manager to create the new server in the server farm.
35. Enabling a New Server
Traditional Workflow
(Diagram: the ADC admin configures the virtual server IP and server farm on the ACE load balancer through the ADC manager (ANM), while the vCenter systems admin separately brings up the VM on the ESX cluster.)
Inefficient operations:
• Multiple systems and administrators = high operations cost
• No shared config and monitoring data = complex operations
36. Enabling a New Server - AVDC Method
Simplified Process, Reduced OPEX
Using AVDC requires only the systems administrator and one management tool:
The sysadmin uses vCenter to enable the new VM.
The sysadmin then adds the new server to the server farm, also from vCenter.
37. Enabling a New Server
AVDC Workflow
(Diagram: the systems admin works in vCenter with the ANM plug-in; the ADC manager (ANM 3.1) programs the virtual server IP and server farm on the ACE load balancer as the VM joins the ESX cluster.)
Operational efficiency:
• Single admin and management point = lower administrative cost
• Shared config and monitoring data in vCenter = simplified operations
Now let's take a step back and look at how the data center and application delivery have evolved. <NOTE THIS IS A BUILD SLIDE>

In DC 1.0, application delivery architectures were very much centralized around a mainframe with terminal access. In the early days of SNA and BNA, application delivery called for Front End Processors (FEPs): dedicated small computers that handled communications processing for the mainframe, connecting the communications lines on one end and the mainframe on the other. FEPs transmitted and received messages, assembled and disassembled packets, and detected and corrected errors, offloading those tasks from the mainframe and freeing precious CPU cycles to increase application performance. This was the early form of application delivery infrastructure.

With the advent of the Internet, and specifically the emergence of TCP/IP as a universal protocol, DC 1.0 transitioned from the centralized mainframe to the decentralized client-server model of DC 2.0, taking advantage of powerful minicomputers and inexpensive, underutilized Unix or x86 servers, which allowed applications to be more distributed. With the emergence of multi-tier architectures and the WWW, a new type of device was required: one that could load balance and distribute traffic efficiently among network servers so that no individual server is overburdened. These devices were Server Load Balancers (SLBs).
Besides load balancing, SLBs also performed other functions to increase the performance, availability and security of application delivery: Layer 4–7 switching, to direct application queries to the most appropriate server; firewall functions, to help ensure the integrity of the company's data and provide application-specific security; offloading of computationally intensive tasks, such as the processing of Secure Sockets Layer (SSL) traffic; and XML application and Web services switching and acceleration.

DC 2.0 and its SLBs also created application silos spread around the data center. If you needed to scale your application delivery, you simply added another server or another SLB into the system, so DC 2.0 environments created a number of critical problems for IT: vast server sprawl, underutilized resources, and application provisioning and management nightmares.

With innovations in virtualization (the emergence of x86 hypervisors such as VMware's ESX server, Xen and Microsoft's Hyper-V, and Unix-based virtualization solutions like HP's vPars, Sun's LDOMs and IBM's LPARs) and a move to SOA and Web 2.0 architectures, DC 3.0 emerged and allowed DC 2.0 infrastructure to be consolidated, virtualized and automated, creating the need for an agile, virtualized application delivery infrastructure. With DC 3.0's virtualized data centers came a shift in how applications are delivered today, in the form of Software-as-a-Service (SaaS) and the outsourcing of applications to Application Service Providers (ASPs). In this new environment, an innovative type of application delivery device was needed, one that enabled IT to be more responsive to its lines of business and its users: the next generation of application delivery devices, the Cisco ACE.
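The core SLB job described above, distributing traffic so no individual server is overburdened, can be sketched with a "least connections" predictor (one of the predictors ACE supports, though this code is purely illustrative and not ACE software):

```python
# Minimal least-connections server load balancer: each new connection goes
# to the real server with the fewest active connections.
class ServerFarm:
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}  # active connection counts

    def pick(self):
        # Least-loaded predictor: choose the server with fewest active conns.
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        # Called when a client connection closes.
        self.active[server] -= 1

farm = ServerFarm(["web1", "web2", "web3"])
order = [farm.pick() for _ in range(6)]
print(order)  # ['web1', 'web2', 'web3', 'web1', 'web2', 'web3']
```

With equal server weights and no releases, least-connections degenerates to round-robin, as the output shows; under uneven connection lifetimes it adapts automatically.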
Cisco ACE, with its unique virtualization capabilities, enables enterprises and service providers to accelerate and scale application deployments, reduce CAPEX and OPEX, simplify application delivery network architectures, and provision applications faster. So what is driving the need to evolve your application delivery? It comes down to escalating costs tied to efficient utilization of resources, set against an ever-changing landscape of business requirements.
Let's look at the reality of application delivery today: it is very clear that enterprises face ever-increasing business requirements to stay competitive in today's economic climate. <discuss each of the business requirements and ask the audience what specific business drivers they are faced with>

Business requirements call for greater collaboration among the workforce, partners and customers, forcing organizations to deploy a wide variety of collaboration applications and new application delivery infrastructures to support them.

Empowered users: thanks to advances in data communication and security protocols, the emergence of a remote workforce, and the adoption of Web 2.0/SOA, users demand access to business applications from anywhere, at any time, from any device. It is almost unthinkable today not to be a smartphone (BlackBerry or iPhone) addict; secure access to key business applications drives business productivity.

To achieve optimum return on investment from these applications, organizations must also ensure that applications are globally available and scalable to meet future needs, while meeting critical SLA metrics to ensure business continuity. If a critical application is compromised, say email, your accounting system on the last day of the quarter, or a critical online trading application with poor performance, the results can be very costly and extremely newsworthy.

Another key business requirement is ever-stricter regulatory compliance, ensuring both the safety and the integrity of application data. Every business is closely monitored and scrutinized, so an application delivery framework that meets regulatory compliance is a must.

While enterprises gear up to meet demands for greater collaboration, quicker access to applications and information, and compliance, they are being strained by the challenges of application delivery:
<discuss each of the challenges and ask the audience what specific business drivers they are faced with>

TCO and service delivery: new services require new physical deployments, new qualification cycles, and dedicated resources, which increase time to deployment and usually result in higher CAPEX and OPEX, affecting the overall TCO.

Application availability and performance: exponential growth in applications, the sheer volume of transactions, and an ever-increasing number of users lead to application response time delays, resulting in poor performance and availability. There is therefore a need for an application delivery infrastructure that is highly available, scales as demand increases, and provides the level of performance that yields maximum productivity.

Application security threats: things have changed. Security was often a defensive response or an afterthought, applied once vulnerabilities were exposed or, worst case, after the business suffered downtime or a loss. Organizations need to adopt and deploy new, more proactive security measures that enable business continuity.

Shift to SOA/Web 2.0: years of unplanned growth and ad hoc custom and in-house application deployment to meet urgent business demands have led to aging application delivery architectures that cannot scale, characterized by infrastructure sprawl and an accumulation of application silos that are poorly optimized and require great effort and time to manage and provision. This aging architecture is ill-equipped to support the Web 2.0 model and the shift to service-oriented architecture (SOA). The challenge for many organizations is not only to fix their infrastructures in the short term to "get back to the baseline," but also to think ahead to how they will take advantage of the shift to the SOA and Web 2.0 service model.
Conclusion: to survive and thrive in today's harsh economic climate, organizations are under increasing pressure to meet the business requirements imposed on them while, at the same time, IT faces the challenges of application delivery needed to sustain the business.
Cisco ACE solutions can be grouped into the following areas to address your application delivery needs: Virtualization / Isolation, for lowering OPEX and CAPEX and providing better security and management of your applications; Application Security, which protects applications and server farms from attacks; Business Continuity / IT Agility, for improved application provisioning and scalability; and Application Performance, which enables faster response times, leading to better all-around productivity.
Let's look at how the virtualized architecture of the ACE can help you lower your TCO (OPEX/CAPEX). In this scenario the customer has deployed the classic multi-tier architecture, with separate load balancers and firewalls at each of the tiers (web, application, and database), as shown on the left. Using the virtualization and firewalling capabilities of Cisco ACE, the three distinct tiers, each with its load balancer and firewall, can be collapsed into a single physical device. Here the Cisco ACE is partitioned (using its virtualization architecture) to take on the role of the load balancer in each tier as well as to provide application-layer security with SSL offload, stateful packet inspection, IP normalization, and server offload capabilities. This simplifies your infrastructure and reduces device sprawl, since you now have a virtual rather than a physical device in each tier, providing additional scalability with faster provisioning. Furthermore, this solution simplifies management and improves the security of your overall solution. Importantly, it is cost effective, since no additional hardware needs to be purchased.
Complete isolation of applications: with Cisco ACE, administrators have the flexibility to allocate resources to virtual devices in any way they see fit. One administrator may allocate a virtual device for every application deployed; another may allocate a virtual device for each department's use, even across multiple applications; a service provider administrator may allocate a virtual device for each customer. Regardless of how resources are allocated, Cisco ACE's virtual devices are completely isolated from each other. Configurations in one virtual device do not affect configurations in other virtual devices. As a result, virtual partitioning provides a novel way of protecting services configured in several virtual devices from accidental mistakes, or malicious configurations, made in another virtual device. A configuration failure on Cisco ACE is limited to the scope of the virtual device in which it was created; a failure in one virtual device has no effect on the others, maximizing uptime for critical applications, especially when Cisco ACE is deployed in a redundant high-availability configuration. Note that with competitors' offerings, customers need to purchase and deploy additional physical units to achieve this level of configuration isolation for applications, departments, and customers. By isolating with virtual partitions instead of physical devices, the Cisco ACE solution provides complete isolation of applications such as Oracle, SAP or Microsoft, simplifies the infrastructure through fewer devices, and improves overall application security for maximum availability.
Improved workflow: with traditional application delivery solutions, application deployment often proceeds slowly because of the need for complex workflow coordination. For a new application to be deployed, or for an existing application to be tested or upgraded, the application group must work with the network administrator to coordinate the desired configuration changes on the application delivery device, typically a load balancer. This process is particularly problematic for the network administrator, who is responsible for ensuring that no configuration change impacts other existing services. ACE's virtual partitioning architecture with role-based access control (RBAC) mitigates this concern by enabling the network administrator to create an isolated configuration domain for the application group. By assigning configuration privileges within a single isolated virtual device to the application group, the network administrator can stay out of the workflow and eliminate the risk of misconfiguring existing applications enabled in other virtual devices. This improved workflow creates a self-service model in which the application group can independently test, upgrade, and deploy applications faster than ever before. With virtual partitions and the RBAC feature of the Cisco ACE, IT can roll out applications faster, make configuration changes without affecting other services, and perform patch updates without having to bring services down, leading to enhanced IT agility and improved workflow.
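The RBAC idea described above, privileges granted per role but scoped to a single virtual device, can be sketched as follows. The role names, permission names, and partition names here are hypothetical, not ACE's actual role model:

```python
# Toy RBAC check: an action is allowed only if the role grants it AND the
# target is the virtual partition the role was scoped to.
ROLE_PERMISSIONS = {
    "network-admin": {"create-partition", "assign-role", "configure"},
    "app-team":      {"configure", "deploy-app", "patch"},
}

def authorized(role, action, role_partition, target_partition):
    # Isolation by construction: even a permitted action cannot touch
    # another virtual device's configuration domain.
    return (action in ROLE_PERMISSIONS.get(role, set())
            and role_partition == target_partition)

print(authorized("app-team", "deploy-app", "partition-2", "partition-2"))       # True
print(authorized("app-team", "deploy-app", "partition-2", "partition-1"))       # False: wrong partition
print(authorized("app-team", "create-partition", "partition-2", "partition-2")) # False: not granted
```

This is what lets the network administrator "stay out of the workflow": the application team's credentials simply cannot express a change outside its own partition.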
Often the challenge of application delivery is not just about overcoming network latency but about overall application performance for improved productivity. Cisco ACE accelerates the application experience for all users, whether they are in the office or on the road. To enable optimal application performance for remote and traveling users, Cisco ACE uses a range of acceleration capabilities to improve application response times, reduce bandwidth, and improve protocol efficiency. These technologies, including hardware-based compression, delta encoding, and FlashForward, improve performance and cut response times by minimizing latency for any HTTP-based application. <MORE DETAILS>

Delta encoding: web page caching is successful because many pages are static; subsequent requests can be satisfied from the cache instead of the server. Dynamic resources and content, however, force subsequent server requests for the original page. But when one can encode and deliver to the client just the differences between the cached original page and the updated new page, many cases can be handled by sending just a few bytes. This approach, called delta encoding, is a core technology of the Cisco ACE. It helps the client system dynamically construct new pages from cached pages by applying small deltas. The process is both automatic and transparent; no changes to browser clients, application servers or content are required.

Smart image optimization: the Cisco ACE compresses image files intelligently to optimize image quality, resulting in faster image download times, faster page renders, and more efficient bandwidth usage. Other schemes compress images uniformly, a policy that can severely degrade the quality of some images while missing opportunities to compress others further. Some images can be highly compressed, while others need to maintain their detail.
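The delta-encoding concept, sending only the difference between the cached page and the updated page, can be demonstrated with Python's difflib standing in for ACE's proprietary encoding (a conceptual sketch only):

```python
# Delta encoding in miniature: the "server" diffs the cached page against
# the current page, and the "client" rebuilds the new page from its cached
# copy plus the small delta.
import difflib

cached  = ["<h1>Portfolio</h1>", "<p>stock: 41</p>", "<p>footer</p>"]
current = ["<h1>Portfolio</h1>", "<p>stock: 42</p>", "<p>footer</p>"]

delta = list(difflib.ndiff(cached, current))   # what the server transmits
rebuilt = list(difflib.restore(delta, 2))      # client applies the delta

print(rebuilt == current)  # True: new page reconstructed without a full resend
```

In a real deployment the delta format is far more compact than an ndiff listing, but the round trip is the same: only the changed bytes cross the wire.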
For example, a JPG photo for an accident claim can be kept at the highest resolution, whereas a scanned insurance policy document can be highly compressed without compromising readability.

Dynamic browser caching: many enterprise applications for customer relationship management (CRM) and for portals mark some objects, such as images, JavaScript files, ActiveX control files, or binary files, as noncacheable. This practice can result in slow download performance, especially for remote users with limited bandwidth. Cisco Just-in-Time Object Evaluation technology on the Cisco ACE automatically tracks the freshness of each of these objects in real time. If a requested object has not changed, the client uses its cached version; the Cisco ACE delivers the object only when it has changed in that specific context.

Server offloads: many organizations are surprised at how much server power is needed to support web capabilities and applications. Cisco incorporates a wide variety of server offload functions in ways that are pertinent and appropriate for enterprise IT deployments. The following features combine to reduce server cycles by up to 80 percent: TCP connection multiplexing for offloading connection management; adaptive and configurable dynamic caching; load-based dynamic caching; lazy-request evaluation; SSL acceleration; URL mapping; single sign-on (SSO) optimizations; and XML transformation.

Cisco ACE, with advanced application acceleration, data encoding and compression, smart image optimization, dynamic browser caching, and a variety of server offloads, delivers faster application response and an overall improvement in productivity.
To increase application security, Cisco ACE provides fully integrated Layer 2 through Layer 7 security (not just application-layer security) that protects application server farms from external attacks. Whereas intrusion prevention and intrusion detection systems protect web servers, the Cisco ACE solution protects against vulnerabilities in web-based applications. What firewalls accomplish at the network level, denying all activities unless explicitly allowed, Cisco ACE accomplishes at the application level. A rules-based, policy-directed approach ensures that automated requests to and from the application comply with policy and do not, for example, include a request to turn off the application. The Cisco ACE performs stateful deep packet inspection of the HTTP protocol to determine exactly what HTTP application traffic is attempting to enter the network. Deep packet inspection is a special case of application inspection in which the Cisco ACE examines the application payload of a packet or traffic stream and makes decisions based on the content of the data. It also helps determine whether the application protocol (in this case, HTTP) is behaving in an irregular manner. Cisco ACE also supports stateful deep packet inspection of FTP, DNS, ICMP, LDAP and SIP. In terms of connection flows, Cisco has integrated SSL and security without requiring additional hops or devices; this reduces latency while still providing all the services, and it simplifies deployment, flows, and troubleshooting. The ACE application switch, in conjunction with the ACE XML Gateway (AXG), offers a robust application security solution for HTTP- and XML-based traffic. The AXG also offers full Web Application Firewall (WAF) capability that protects applications and server farms from external attacks.
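The "deny unless explicitly allowed" inspection model described above can be illustrated with a tiny rules-based HTTP check. The rules, methods, and limits here are invented for illustration; a real ACE inspection policy is far richer:

```python
# Toy rules-based HTTP inspector in the deny-unless-allowed spirit: only
# whitelisted methods and well-formed, benign-looking requests pass.
ALLOWED_METHODS = {"GET", "POST", "HEAD"}
MAX_URI_LEN = 1024

def inspect(method, uri, headers):
    if method not in ALLOWED_METHODS:
        return "BLOCKED"   # unknown methods rejected by default
    if len(uri) > MAX_URI_LEN or "../" in uri:
        return "BLOCKED"   # oversized URIs and path traversal attempts
    if not headers.get("Content-Length", "0").isdigit():
        return "BLOCKED"   # malformed protocol field
    return "ALLOWED"

print(inspect("GET", "/index.html", {}))        # ALLOWED
print(inspect("TRACE", "/", {}))                # BLOCKED: method not whitelisted
print(inspect("GET", "/../../etc/passwd", {}))  # BLOCKED: traversal attempt
```

Real deep packet inspection additionally tracks protocol state across the whole session, which a per-request function like this cannot capture.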