3. Primary Market Drivers
• The competition is only a click away in today's
web-facing world.
• Response times are critical to giving customers
a good experience and generating revenue.
• Customer sessions are becoming more critical.
• The cost of attracting new customers to your
web site for enrollment is significant.
• Losing the data that they have entered will likely create a negative impression and result in much higher abandonment rates.
6. What is a Portal Farm?
■ A series of identically configured, stand-alone portal instances
■ No managed cell, no clustering, no Deployment Manager – just stand-alone Application Server runtimes!
■ Workload management handled using any load balancer
– HTTP Server plug-in can be used with manual configuration
■ Server instances treated as commodities
– Rip-n-replace
– Can more easily mix/match maintenance levels
■ Extremely simple to grow/shrink capacity based on demand
■ Particularly well suited for cloud-based deployments
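The manual plug-in configuration mentioned above amounts to listing every farm member as a routing target in the plug-in's route table. An illustrative plugin-cfg.xml fragment follows; host names, ports, and the cluster/URI names are examples only, and a real generated file also carries VirtualHostGroup and Route elements not shown here:

```xml
<!-- Illustrative fragment: one "cluster" entry per farm, listing each
     stand-alone portal instance as a round-robin target. -->
<ServerCluster Name="PortalFarm" LoadBalance="Round Robin">
  <Server Name="portal1">
    <Transport Hostname="portal1.example.com" Port="10039" Protocol="http"/>
  </Server>
  <Server Name="portal2">
    <Transport Hostname="portal2.example.com" Port="10039" Protocol="http"/>
  </Server>
</ServerCluster>
<UriGroup Name="PortalFarmUris">
  <Uri Name="/wps/*"/>
</UriGroup>
```

Because there is no Deployment Manager to regenerate this file, adding or removing a farm member means editing (or scripting) this route table by hand and reloading the plug-in.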
7. Typical Customers Using Portal Farms
• Banks
• HR services providers
• Companies that need continuous availability
• Companies that are using LPARs or virtual images and SANs to run their
Portal and want to simplify maintenance.
• Companies that do not wish to set up and maintain multiple Portal clusters or WAS Extended Deployment.
• Keep it simple and make it work.
8. Portal Farming – What is Missing?
• Ultimate simplicity at the loss of some functionality:
– DB or grid-based session persistence and failover only*
– No distributed cache management*
– No distributed EJB usage
– No synchronized configuration (without the aid of file-system utilities)
– No coordinated task scheduling
– No cluster-scoped administrative actions
− Start/stop applications
− Can be replaced by “flexible administration” or scripting
• Customers need to understand these limitations before considering a farm-based deployment
9. Why Choose Farming?
• Farming is a simple architecture: just a series of identical stand-alone portal instances, load balanced by the HTTP plug-in or any load balancer.
• Server farms are an effective way to build and maintain a highly scalable, highly available server environment.
• Farms allow dynamic expansion and contraction of capacity without complex cluster configurations, which are usually time consuming.
• Sourcing additional servers using cloning or virtualization is very rapid; with the WP7 shared configuration it is even more rapid and simple.
• The client had a very tight maintenance window; deployments on clusters and synchronizing clusters were stretching that window.
• Though administrative actions need to be repeated on each server independently, this can be achieved with automation scripts or tools.
• The customer understood the limitations of farming – distributed caching, EJB, cluster administration, etc.
(Diagram: requests load balanced across a unique-install portal farm; each node has its own REL and JCR database domains, with shared CUS and COM domains.)
10. Multi-Tenant Design Features
• Hosting multiple clients on shared infrastructure
• All customers are hosted on a huge infrastructure cloud
• Dynamic launching of clients is enabled
• Clients are allowed to choose services from a list of available services
• Provide complete client isolation so each client operates in its own SILO
• Resource sharing is enabled at various levels, but is not transparent to clients – shared resources are:
– Hardware
– Server and JVM resources – CPU, memory, disk space
– Portal instances
• Client identity (branding) is handled by providing custom personalization in the application design
11. Client isolation and insulation – Conceptual model
• Every customer needs to be in their own virtual environment, completely isolated
• Insulated from information spill and load fluctuations
• Is physical isolation a reasonable solution?
(Diagram: one SILO per client – Gateway, Security, Web Tier, Portal App Tier, DB Tier – repeated across 57K clients.)
13. Elastic Caching minimizes the impact of Transaction Overload
(Diagram: request flow from the IBM HTTP Server web tier through the WebSphere Application Server tier to an elastic cache in front of the DB2 UDB database tier; the elastic cache improves performance, scalability, and availability for highly scalable web applications and data-intensive applications.)
14. Innovative Elastic Caching Solutions
• DataPower XC10 Appliance – “data oriented”
– Drop-in cache solution optimized and hardened for data-oriented scenarios
– High density, low footprint improves datacenter efficiency
• eXtreme Scale – “application oriented”
– Ultimate flexibility across a broad range of caching scenarios
– In-memory capabilities for application-oriented scenarios
• Scenario spectrum, from data oriented to application oriented: session management, elastic DynaCache, web side cache, worldwide cache, data buffer, event processing, petabyte analytics, in-memory OLTP, in-memory SOA
• Common to both: elastic caching for linear scalability; high-availability data replication; simplified management, monitoring, and administration
16. Applications using DynaCache
• Each JVM has a private disk-based cache to support caches much larger than possible with a memory-only conventional cache
• Two-tier cache: the JVM has a small local cache, backed by the disk file
• Cached content is redundant across JVMs
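The two-tier arrangement on this slide can be sketched as a small in-memory LRU tier that spills evicted entries to a disk tier. The following is a minimal model, not the DynaCache implementation: the disk tier is stubbed with a plain Map for brevity, where DynaCache really writes per-JVM disk-offload files.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative model of a per-JVM two-tier cache: a small in-memory LRU
 *  tier that overflows evicted entries to a "disk" tier. The disk tier is
 *  modeled with a plain Map here; the real cache writes disk files. */
class TwoTierCache {
    private final Map<String, String> disk = new HashMap<>();
    private final int memoryCapacity;
    private final LinkedHashMap<String, String> memory;

    TwoTierCache(int memoryCapacity) {
        this.memoryCapacity = memoryCapacity;
        // access-order LinkedHashMap gives us LRU eviction
        this.memory = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                if (size() > TwoTierCache.this.memoryCapacity) {
                    disk.put(eldest.getKey(), eldest.getValue()); // overflow to disk tier
                    return true;
                }
                return false;
            }
        };
    }

    void put(String key, String value) {
        memory.put(key, value);
    }

    /** Memory hit, else disk hit (promoting the entry back to memory), else null. */
    String get(String key) {
        String v = memory.get(key);
        if (v == null) {
            v = disk.remove(key);
            if (v != null) memory.put(key, v); // promote on disk hit
        }
        return v;
    }

    int memorySize() { return memory.size(); }
    int diskSize() { return disk.size(); }
}
```

The promote-on-disk-hit step is what keeps hot entries in the fast tier while the disk tier absorbs the long tail that would not fit in memory.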
17. News Portlet Deployment - Failure
During a recent ‘News’ application promotion, the customer response to the new portlet overwhelmed the web site. The web site became painfully slow under the significant load. The result: not a happy customer.
(Diagram: WPS instances, each with its own DynaCache disk-offload – too slow under load.)
18. Scalability: Off-loading Dynamic cache to WXS/XC10
• Much larger cache capacity
• WebSphere Portal JVMs run more efficiently
– Lower local memory requirements
– Faster start-up time
• Improved consistency of performance
– Improved cache and environment stability
– High availability of cached data
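The offload itself is essentially a per-cache-instance configuration switch that points the cache provider at eXtreme Scale. A sketch of the corresponding cacheinstances.properties entries follows; the instance name and size are examples, and the provider class name should be verified against the WXS/XC10 DynaCache provider documentation for your release:

```properties
# Illustrative cacheinstances.properties entry for one offloaded instance.
# Instance name and cacheSize are examples; verify the provider class name
# against the WXS/XC10 DynaCache provider documentation.
cache.instance.0=/services/cache/wcmAdvancedCache
cache.instance.0.cacheSize=1000000
cache.instance.0.cacheProviderName=com.ibm.websphere.xs.dynacache.CacheProvider
```

With the provider pointed at the grid, entries inserted into this instance live in the remote grid rather than in the Portal JVM's heap or its disk-offload files.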
19. News Portlet Deployment - Success
During a recent ‘News’ application promotion, the customer response to the new portlet was very high. However, with the addition of an elastic cache, the web site was able to handle the significant increase in load. Customers did not perceive any slowdown of the web site. The result: happy customers and a successful content promotion.
(Diagram: WPS instances sharing a WXS elastic cache; with a WXS DynaCache grid configured, disk-offload is no longer required.)
20. Fast start-up when adding more capacity – on the fly
New WebSphere Portal servers can be brought on-line quickly to meet increased capacity needs. When start-up is complete, the new server has immediate access to a warm cache provided by eXtreme Scale.
(Diagram: a new WPS server joining existing WPS instances that share a WXS elastic cache.)
21. Maintain consistent user experience during site maintenance
If a WebSphere Portal server needs to be restarted after applying an iFix, eXtreme Scale can provide up to 54% improvement in time to reach steady state.
(Diagram: one WPS instance down for maintenance while the remaining instances serve from the shared WXS elastic cache.)
22. Scenario Details
• Two Portal Servers with Web Content Manager
• 300 concurrent users simulating Wiki/Blog accesses
• Single WCM DB server; Web Content Manager DB content: 50 GB
• Two XC10 caching appliances
• Advanced Cache maximum entries:
– Using App Server heap: 5,000 per server
– Offloading to XC10: 1,000,000 shared available (observed ~9 GB)
(Diagram: a proxy in front of two WPS+WCM servers, a WCM DB, and a two-appliance XC10 collective.)
23. Portal Customer Experience – Steady State Comparison
Enabling WebSphere Content Manager Advanced Cache using an offloaded eXtreme Scale/XC10 grid cache:
• With WXS/XC10, average throughput in our steady-state concurrent-user scenario was consistently faster than with the default Advanced Cache:
– 42% improvement over no Advanced Cache in our scenario
– 24% throughput improvement over the default cache implementation using Application Server JVM heap in our scenario
• Using the default Advanced Cache implementation requires available Application Server heap; offloading the cache to WXS/XC10 does not require heap.
(Charts: “Cache Offload Performance” – throughput in requests/second, comparing “No WCM Advanced Cache”, “Default WCM Advanced Cache”, and “WCM Advanced Cache Offloaded to XC10”.)
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. Actual performance in a user's environment may vary.
24. Portal Customer Experience – Steady State Comparison
With WXS/XC10, average steady-state response times are consistently faster than with the default WebSphere Content Manager Advanced Cache:
• 5.5-second improvement over no Advanced Cache in our scenario
• 3.4-second improvement over the default cache implementation using Application Server JVM heap in our scenario
(Charts: “Cache Offload Performance” – response time in seconds, comparing “No WCM Advanced Cache”, “Default WCM Advanced Cache”, and “WCM Advanced Cache Offloaded to XC10”.)
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. Actual performance in a user's environment may vary.
25. Reducing Portal warm-up time – Cold Start Results
• With WXS/XC10, average throughput of a newly started server is consistently faster than with the default WebSphere Content Manager Advanced Cache: 54% throughput improvement in our scenario.
• With WXS/XC10, average response times are consistently faster than with the default Advanced Cache: a 4-second improvement observed in our scenario.
• With WXS/XC10, response times improve faster due to quicker cache hydration.
(Charts: “Cache Offload Performance” – throughput in requests/second and response time in seconds, comparing “Default Advanced Cache” and “Advanced Cache Offloaded to XC10”.)
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. Actual performance in a user's environment may vary.
26. Summary of Primary Benefits
WCM Advanced Cache, implemented through DynaCache, stores fully rendered pages so that they do not have to be pulled out of the WCM DB. Today, customers can enable Advanced Cache in the app server's heap space. The technical goal is to avoid trips back to the WCM database to rebuild these pages. The WXS plug-in allows you to store DynaCache content in a remote grid, so that the data inserted into DynaCache does not consume app server heap space.
1. Caching is of the highest importance with WCM; complex WCM components can be very CPU intensive.
2. A WXS grid can store more data and achieve a larger hit percentage than DynaCache, reducing trips to the more expensive WCM DB (more consistent response times).
3. Benefits customers who are heap constrained (and so cannot use DynaCache): they can leverage the Advanced Cache without committing memory on their Portal server, since the WXS scenario does not consume memory on the Portal server.
4. Shared cache: each portal JVM does not have to warm its own cache on server restarts.
5. Eliminates invalidation chatter – critical in the farm topology.
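The shared-grid benefit can be sketched with a simple cache-aside model: portal "JVMs" consult a shared cache before rebuilding a page from the WCM database. This is an illustrative model only – a ConcurrentHashMap stands in for the WXS/XC10 grid, and all class, method, and path names are made up for the example:

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

/** Illustrative cache-aside model of WCM Advanced Cache offload. A shared
 *  map stands in for the remote grid; each miss costs a "DB trip". */
class PortalRenderer {
    static final AtomicInteger dbTrips = new AtomicInteger();
    private final Map<String, String> sharedGrid; // stands in for the WXS/XC10 grid

    PortalRenderer(Map<String, String> sharedGrid) {
        this.sharedGrid = sharedGrid;
    }

    /** Cache-aside: hit the grid first, fall back to the expensive DB build. */
    String renderPage(String path) {
        return sharedGrid.computeIfAbsent(path, this::buildFromWcmDb);
    }

    private String buildFromWcmDb(String path) {
        dbTrips.incrementAndGet(); // each trip here is the cost we want to avoid
        return "<html>rendered " + path + "</html>";
    }
}
```

Because the grid outlives any single renderer, a second JVM – or a restarted one – starts warm and never repeats the DB trip for content that is already cached.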
29. Related Key Features in Recent WXS Release (8.6)
• eXtremeIO (XIO)
• Replaces Object Request Broker (ORB)
• More efficient transport layer
• eXtreme Data Format (XDF)
• Serializes data for sharing between Java and C#/.NET applications.
• Index data on server without requiring user classes to be present.
• Automatic Versioning of classes
• Portal Farm Impacts
• Further performance improvements possible with XIO.
• The elimination of data serialization requirements will broaden the set of Portal caches that are appropriate for offload to WXS.
32. Portal Advanced Cache
• DynaCache instance used to store rendered content – specifically, content pulled from a Web Content Manager database
• Configuration used:
– Site-level caching (rendered content)
– 30-day expiration
– Do not clear cache on startup
34. IBM WebSphere eXtreme Scale
• Proven mature product:
– Fourth major release of product with V7.1
– Public References
– Private References
– Used at some of the largest web
sites/companies in the world
• Lightweight runtime footprint (20MB jar)
• Integrates with all versions of WebSphere and
almost any Java-based application container or
Java Virtual Machine
• Proven multi-data center capabilities
• Proven low-latency access to data
35. IBM WebSphere DataPower XC10 V2
• New form factor (2U)
• Larger cache (240 GB)
• Better performance (faster SSD, use of RAM)
• Improved monitoring (SNMP support)
• Support for non-Java applications (REST gateway)
• Grid capping
36. Utilizing WebSphere DataPower XC10 for DynaCache
• Clients can attach to the ‘cache’ using the network
• No dependency on a large file-system cache
• No disk dependency; no SAN required
• The cache is as large as the memory in the ‘grid’
• Each record is stored once in the grid and shared by all clients
(Diagram: clients connecting over the network to an XC10 collective.)
37. HTTP Session data cache
• No new code required
• Extension of the legacy session-management caching mechanism in WebSphere Application Server
• Extensions to the WebSphere Application Server administrative console to support WebSphere DataPower XC10 and WebSphere eXtreme Scale session-management caching
• WebSphere Application Server connects seamlessly to the WebSphere DataPower XC10 appliance or WebSphere eXtreme Scale
– Client code must be installed on WebSphere Application Server systems
• Easily configure WebSphere applications to store HTTP session data in a data cache on the WebSphere DataPower XC10 appliance through the WebSphere Application Server administrative console
• Replaces other session replication mechanisms (memory-to-memory replication)
• Removes the need for a database traditionally used for persistence
• Enables HTTP session failover between WebSphere Application Server cells
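The failover behavior in the last point can be modeled in a few lines. This is a sketch only – a plain in-process map stands in for the shared grid, and none of the class or method names are the real WAS/XC10 API:

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative model of grid-backed HTTP session persistence: app servers
 *  keep no authoritative session state locally, so any server (even one in
 *  another cell) can resume a session from the shared store after a failure. */
class SessionStore {
    private final Map<String, Map<String, String>> grid = new HashMap<>();

    /** Fetch-or-create, as a server does for each request carrying a session id. */
    Map<String, String> session(String sessionId) {
        return grid.computeIfAbsent(sessionId, id -> new HashMap<>());
    }
}

class AppServer {
    private final SessionStore store;

    AppServer(SessionStore store) {
        this.store = store;
    }

    void handleLogin(String sessionId, String user) {
        store.session(sessionId).put("user", user); // written through to the grid
    }

    String currentUser(String sessionId) {
        return store.session(sessionId).get("user");
    }
}
```

Because the session's authoritative copy lives in the store rather than in either server's heap, the load balancer can route the next request anywhere and the user stays logged in.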
38. Farming: Shared installations & Session caching
• Ability to share the profile & persist sessions
• Manage the life cycles of HTTP sessions that are associated with the application
• Improve QoS and lower the memory footprint
• Better guarantees of session availability during server failover
• Topology can span multiple data centers in different physical locations
(Diagram: farm nodes backed by an elastic cache on DataPower XC10.)
Editor's Notes
3 tier Application topologies are common today. Traditionally, to scale you have to scale at all tiers, which means proliferation of web server and application server tiers and the complexity or manual work that can be required to manage & maintain them. You also have a database that you continue to scale up, but eventually becomes a bottleneck for your transaction processing, and you reach the physical and cost limitations quickly. Servers multiply quickly following this approach… There is a simpler way to address the scaling needs for your applications. Enter elastic data grids based on XC10 and WXS… Sure, you will grow your web tier and your app server tier some. But by adding an elastic data grid into your architecture, you can very quickly and easily scale out your transaction volumes with minimally invasive changes to your app and architecture. By doing this, you also drastically reduce your reads & writes on the database, cutting back on those time and resource intensive calls that created your bottleneck. We’ll talk a bit more about the details of how this works in a second. Elastic data grids are enabled by WXS running on commodity hardware (x86 boxes) or the new XC10 Appliance. Both solutions are easy to access and cost effective to “add a few more” as you continue to grow. XC10 provides 160GB cache in one box, pre-bundled to save customers time when adopting a distributed caching solution for common scenarios such as WAS HTTP Session Replication and extension of the Dynamic Cache Service.
300 active users across 2 Portal servers 2 XC10s used for XC10 cache scenario Performing Wiki/Blog browsing simulation
Measure performance of a newly started server With Default Advanced Cache this means an empty cache With XC10 Offloaded cache, the shared cache is still warm Measure performance during first 30 minutes of load after server start
IBM WebSphere DataPower XC10 supports session data cache for WebSphere Application Server. Session management data caching exists in current and previous releases of WebSphere Application Server. DataPower XC10 provides extensions so that WebSphere Application Server can use the appliance for the session data cache, instead of relying on local JVM memory or database storage. You can create the session data caches ahead of time in the appliance, and then, using the WebSphere Application Server administrative console, associate servers directly with the data cache that you have already created. Alternatively, you can use the WebSphere Application Server administrative console to create the session data cache on the appliance at the time you enable the server for session data caching.