7. NIST Definition of Cloud Computing Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models. Slide 5
8. Internet SaaS PaaS IaaS Cloud Computing Service Models Overview Application Cloud Computing Platform Infrastructure Slide 6
10. The consumer uses the provider’s applications running on a cloud infrastructure.
11. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email).
12. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application capabilities.
16. The consumer deploys onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider.
17. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage.
18. The consumer has control over the deployed applications and possibly application hosting environment configurations.
19. Examples of PaaS providers: Google App Engine, Force.com, and more.
22. The consumer can provision processing, storage, networks, and other fundamental computing resources.
23. The consumer can deploy and run software, which can include operating systems and applications on the provisioned resources.
24. The consumer does not manage or control the underlying cloud infrastructure
25. The consumer has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., load balancers, IPS, etc.).
26. Examples of IaaS providers: Amazon EC2, Terremark, Savvis, and more.
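The split of management responsibility across the three service models described above can be summarized as a small matrix. The following is an illustrative sketch only — the layer names and the `who_manages` helper are assumptions for this presentation, not any provider's API:

```python
# Illustrative responsibility matrix for the three NIST service models.
# "consumer" = managed by the cloud consumer, "provider" = managed by the provider.
LAYERS = ["application", "runtime/middleware", "operating system",
          "storage", "servers", "network"]

RESPONSIBILITY = {
    # SaaS: the consumer only uses the application; everything is provider-managed.
    "SaaS": {layer: "provider" for layer in LAYERS},
    # PaaS: the consumer controls deployed applications (and possibly
    # hosting-environment configuration); the platform and below are provider-managed.
    "PaaS": {**{layer: "provider" for layer in LAYERS},
             "application": "consumer"},
    # IaaS: the consumer controls the OS, storage, and applications; the provider
    # still manages the underlying physical infrastructure.
    "IaaS": {**{layer: "provider" for layer in LAYERS},
             "application": "consumer", "runtime/middleware": "consumer",
             "operating system": "consumer", "storage": "consumer"},
}

def who_manages(model: str, layer: str) -> str:
    """Look up who manages a given layer under a given service model."""
    return RESPONSIBILITY[model][layer]
```

Note that in every model the physical servers and network stay provider-managed — this is exactly the "does not manage or control the underlying cloud infrastructure" clause repeated in each definition above.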
38. The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities.
39. The clouds are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
44. Cloud Computing Service Providers (diagram): providers grouped by Application, Platform, Infrastructure, and Storage tiers, with examples including Synaptic Hosting Service, Synaptic Storage as a Service, Amazon S3, and Vodafone PC Backup. Slide 11
62. Enterprise Workload Analysis (chart): workloads plotted by gain from an external cloud (vertical axis, lower to higher) against pain of cloud delivery (horizontal axis, higher to lower). "Loosely Coupled" architectures (SME ERP/SCM/CRM, collaboration, low-data/compute numerical) offer higher gain and lower pain; "Virtualized Traditional" architectures (web serving, virtual desktop) and "Content Centric" / "Storage - Analytics" architectures (data warehousing, data mining, high-data-transfer numerical) sit in between; "Database Centric" architectures (LE ERP/SCM/CRM, LE transaction processing) offer lower gain and higher pain. Enterprises start here. Slide 20
66. Cloud Networks Challenges
On-demand Self-Service: Allow customers to provision application delivery and security resources on demand via an open API; maximize revenue from the “resource self-serving” business model; add/remove capacity and services on demand.
Resource Pooling: Ensure hosted applications’ service levels; effectively and correctly redirect end-user traffic.
Rapid Elasticity: Dynamically align application traffic and VM resources; automatic VM provisioning based on real-time business events; extending the infrastructure to remote data centers.
Measured Service: Continuously monitor resources for metering and billing purposes; gaining application awareness in the network.
Slide 24
86. On-Demand Self-Service: ADC in the IaaS Data Center
Step #1: Customer-1 enlarges the CPU and RAM capacity of a VM to support more traffic.
Step #2: The self-service portal updates the VMs’ configuration through the VI Management.
Step #3: The self-service portal updates the configuration of the ADC via its API to allow more traffic to the servers.
(Diagram: Customer-1 → Internet → Firewall → ADC → Farm-1 (Application 1) and Farm-2 (Application 2), with the Self-Service Portal driving the VI Management and the ADC.)
Slide 33
87. On-Demand Self-Service: ADC in the IaaS Data Center
Step #1: Customer-1 adds a new application via the self-service portal and updates the ADC.
Step #2: The self-service portal creates the application VMs through the VI Management.
Step #3: The self-service portal creates Farm-1 on the ADC and assigns the VMs to the farm.
(Diagram: Customer-1 → Internet → Firewall → ADC → Farm-1 (Application 1) and Farm-2 (Application 2).)
Slide 34
94. Pay-as-you-grow approach
Step #3: vAdapter assigns the new VMs to Farm-1 on AppDirector.
(Diagram: Customer-1 and vCenter with vAdapter; Internet → Firewall → ADC → Farm-1 (Application 1) and Farm-2 (Application 2).)
Slide 35
95. Knowing Your Network is Cloud Ready. Having a Cloud-ready network means providing the following Cloud Services: Slide 36
96. Elastic Application and VM Resources Alignment: IaaS Data Center
Step #1: A user accesses the hosted application at the IaaS data center.
Step #2: A breach of the application SLA due to a lack of server resources is detected.
Step #3: Computing resources are dynamically added to the application and the ADC is updated.
Step #4: Traffic is redirected to the new resource.
(Diagram: Internet → Firewall → ADC with Local/Global TR → front tier and database servers on the Virtualization Infrastructure (VI), managed by vCenter and VirtualDirector.)
Slide 37
100. Reduce virtual infrastructure OPEX by freeing IT resources
Step #1: A user accesses the hosted application at the IaaS data center (Private Cloud Data Center 1).
Step #4: A breach of the application SLA due to a lack of server resources is detected.
Step #5: Computing resources are dynamically added to the application and the ADC is updated.
Step #6: Traffic is redirected to the new resource.
(Diagram: two data centers, each with Internet → Firewall → ADC with Local/Global TR → front tier and database servers on the Virtualization Infrastructure (VI), with vCenter and an optional VirtualDirector.)
101. Knowing Your Network is Cloud Ready. Having a Cloud-ready network means providing the following Cloud Services: Slide 39
123. Measured Service: Ensuring Cloud applications’ availability, performance, and security. Allowing you to have your feet on the ground and your head in the cloud! Slide 45
124. Thank you. Questions? Contact us at info@radware.com or visit our website at www.radware.com
Editor’s notes
One way to understand what Cloud Computing is, is to review the definition provided by NIST, which is accepted by most people in the industry. The most important parts of the definition are the ones marked with an underline: convenient means it should be easy to access the Cloud Computing resources; on-demand means users can access the Cloud Computing resources whenever they wish, 24/7; network access means the access is done via any browser over the Internet; shared pool means all the Cloud Computing resources draw on the same shared pool of (physical) resources; rapidly provisioned and released means the Cloud Computing resources can be quickly added or removed by the user, without any intervention by the Cloud Computing provider. The five essential characteristics, three service models, and four deployment models will be reviewed later in the presentation.
Here are some figures relating to Cloud Computing, to give you a grasp of the size of the market. As you can see, in 2009 the estimated size of Cloud-based services was around $17.4 billion, and it is expected to almost triple by 2013. Applications and App Dev/Deploy = PaaS; Servers = IaaS; Infrastructure Software = PaaS.
As you can see in this slide, the leading IaaS providers today host a large number of web sites. Amazon itself hosts around 2,300 sites.
Here are some well-known web sites which are hosted within the IaaS model. Read through slide.
In order for a data center’s network to be ready to provide Cloud-based services, it must first address two sets of challenges. The first set revolves around the challenges derived from the network delivering Cloud-based services; these challenges are directly linked to the characteristics of Cloud Computing, and we will discuss them later in the presentation. The second set of challenges revolves around the data center’s need to deliver applications to end-users. By becoming a Cloud service provider, the service provider becomes an application delivery provider, which means it must now address all of the challenges of application delivery, such as application availability and security.
The five essential characteristics of Cloud Computing as determined by NIST are:
On-demand Self-Service: Consumers can self-provision computing and network resources, such as servers, networking, application delivery, security, etc. Provisioning is automatic, as needed, and accomplished using Web Service-based APIs without interaction with the service provider.
Broad Network Access: Services are available over the network and accessed through standard mechanisms (e.g., web browsers, Flash). Services are supported by thin or thick client platforms (e.g., smart-phones, laptops, etc.).
Resource Pooling: The service provider’s computing and network resources must be pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.
Rapid Elasticity: Computing and network resources must be rapidly and elastically provisioned. This means the resources must also be automatically provisioned to quickly scale out, and rapidly released to quickly scale in, as needed. To accomplish this, resources must be available for purchase in any quantity at any time.
Measured Service: Cloud systems automatically control and optimize resource use by leveraging a metering capability appropriate to the type of service. Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and the consumer of the utilized service.
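The Measured Service characteristic can be sketched as a tiny per-tenant usage meter. This is a hedged illustration only — the `UsageMeter` class, resource names, and rates are made up for this sketch and do not represent a real billing API:

```python
# Minimal sketch of "Measured Service": metering resource usage per tenant
# so that it can be reported and billed. All names and rates are illustrative.
from collections import defaultdict

class UsageMeter:
    def __init__(self, rates):
        self.rates = rates                       # per-unit price for each resource
        self.usage = defaultdict(lambda: defaultdict(float))

    def record(self, tenant, resource, amount):
        # Accumulate metered usage for a tenant (e.g. VM-hours, GB transferred).
        self.usage[tenant][resource] += amount

    def bill(self, tenant):
        # Sum metered usage multiplied by the per-unit rate.
        return sum(amount * self.rates[res]
                   for res, amount in self.usage[tenant].items())

meter = UsageMeter({"vm_hours": 0.10, "gb_transferred": 0.02})
meter.record("customer-1", "vm_hours", 24)
meter.record("customer-1", "gb_transferred", 50)
print(round(meter.bill("customer-1"), 2))  # 24*0.10 + 50*0.02 = 3.4
```

The same metering data gives the transparency mentioned above: the provider uses it for billing, and the consumer can inspect it to verify what was consumed.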
Each one of the essential characteristics we just described imposes its own set of challenges:
On-demand Self-Service: To enable self-service, providers must figure out how to allow customers to add/remove/modify networking resources on demand without intervention. In addition to providing a self-service option for computing resources, which is a commodity with Infrastructure as a Service, providers must now try and offer advanced self-provisioning capabilities. To achieve this, they must figure out how to maximize revenue from the “self-service” business model using the network infrastructure that’s already in place.
Resource Pooling: As discussed, one of the main characteristics of Cloud Computing is the fact that we pool our virtual resources on the same hardware infrastructure. This creates two main challenges. First, cloud service providers need to ensure hosted applications maintain service levels in a shared environment where multiple apps are running on the same hardware. Second, they need to effectively and correctly redirect end-user traffic in this shared environment while ensuring each end-user has access to the correct application.
Rapid Elasticity: Providers must be able to dynamically align application traffic and VM resources to ensure they always have the exact amount of computing resources to fit end-user and application needs. This means the need for automatic VM provisioning based on real-time business events. Also, providers must consider the ability to provision new resources in a remote data center in case the primary data center is maxed out or down.
Measured Service: Providers must gain application awareness in the network, not just to ensure SLAs are met, but also for the purpose of metering and billing network-element usage.
The first part of Radware’s new Cloud Networks solution is our ability to become a self-service ADC, thus giving service providers the option of allowing their customers to self-provision their application delivery needs. Please note that the self-service portal is owned and managed by the Infrastructure as a Service provider and is used by the customer for all their provisioning needs, such as computing and application delivery resources. So in this example we have Customer-1, which manages Application-1. Application-1 is represented and load balanced on Radware’s ADC as Farm-1. To accommodate an expected growth in traffic, Customer-1 would like to increase the CPU and RAM of their virtual machines. Run 1st and 2nd animation and explain example. This whole process is done via the service provider’s “self-service portal”. However, it is not enough to only enlarge the application VMs; the customer would also like to enlarge the bandwidth capacity provided to Farm-1 by the ADC. Run 3rd animation and explain example. This process is also done via the service provider’s “self-service portal”.
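The three-step portal flow described here can be sketched in pseudocode-style Python. This is a hedged illustration of the orchestration pattern only — the classes and method names (`VIManagement`, `ADC`, `SelfServicePortal`) are hypothetical stand-ins, not Radware's actual APIs:

```python
# Illustrative sketch of the slide-33 flow: the self-service portal resizes a
# customer's VM through VI management, then raises the bandwidth limit on the
# ADC farm via its API. All class and method names are assumptions.

class VIManagement:
    """Stand-in for the virtual-infrastructure manager (e.g. a vCenter-like API)."""
    def __init__(self):
        self.vms = {}
    def resize_vm(self, vm, cpus, ram_gb):
        self.vms[vm] = {"cpus": cpus, "ram_gb": ram_gb}

class ADC:
    """Stand-in for the application delivery controller's open API."""
    def __init__(self):
        self.farms = {}
    def set_farm_bandwidth(self, farm, mbps):
        self.farms[farm] = {"bandwidth_mbps": mbps}

class SelfServicePortal:
    """The provider-owned portal that drives both systems on the customer's behalf."""
    def __init__(self, vi, adc):
        self.vi, self.adc = vi, adc
    def scale_up(self, vm, cpus, ram_gb, farm, mbps):
        self.vi.resize_vm(vm, cpus, ram_gb)        # Step #2: update VM config
        self.adc.set_farm_bandwidth(farm, mbps)    # Step #3: update ADC via API

vi, adc = VIManagement(), ADC()
portal = SelfServicePortal(vi, adc)
# Step #1: Customer-1 requests more CPU/RAM for its VM and more farm bandwidth.
portal.scale_up("app1-vm1", cpus=4, ram_gb=16, farm="Farm-1", mbps=200)
```

The point of the pattern is that the customer touches only the portal; the portal keeps the compute side and the delivery side consistent in one operation.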
Best-of-breed self-serving ADC:
A customer can easily add a new VIP to the ADC representing his hosted application.
The ADC automatically measures the traffic and number of users for billing purposes.
Full support for network requirements.
Facilitates the generation of new revenue from the existing ADC infrastructure:
Application delivery as a service.
Application acceleration capabilities as add-on services.
On-demand throughput and service scalability:
Cost-effectively accommodate future growth in the number of users, applications, and traffic served by the ADC.
Full investment protection, increased asset ROI, and CAPEX savings: no forklift upgrade required.
Pay-as-you-grow approach: pay for the exact capacity required, and flexibly scale up when more is needed.
In the case where the service provider does not wish to allow customers to directly modify the ADC’s configuration, it is possible to use Radware’s vAdapter, which will automatically update the ADC’s farm configuration whenever a modification is made to the applications’ VMs. vAdapter is a virtual appliance which allows automatic synchronization between changes in the virtual environment and the ADC’s configuration, thus removing the need for the customer to manually modify the ADC’s configuration whenever they self-provision new computing resources. Run animation and explain example. There are many business benefits to this solution, including:
Best-of-breed self-serving ADC: You get all the benefits of Radware’s market-leading ADC coupled with self-serving capabilities. Service providers can now purchase Radware’s Cloud Network Ready ADC and use it as an infrastructure ADC until they are ready to provide their customers with application delivery services; once they are, the ADC is already in place and ready to go. Customers can easily add a new VIP or Farm to the ADC, and the ADC automatically measures the traffic and number of users for billing purposes.
Open API for external ADC management: Real-time alignment of resources with the network, and full support for current and future network requirements.
Facilitates the generation of new revenue from the existing ADC infrastructure by being able to offer application delivery as a service, and application acceleration capabilities as add-on services.
Service providers have on-demand throughput and service scalability to cost-effectively accommodate future growth in the number of users, applications, and traffic served by the ADC.
Full investment protection, increased asset ROI, and CAPEX savings: no forklift upgrade required. Pay-as-you-grow approach: pay for the exact capacity required, and flexibly scale up when more is needed.
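The synchronization idea behind a vAdapter-style component can be sketched as an event handler that mirrors VM lifecycle changes into ADC farm membership. This is a hedged sketch of the general pattern, not vAdapter's implementation — the event names, `FarmSync` class, and the application-to-farm mapping rule are all assumptions:

```python
# Sketch of VM-to-ADC synchronization: watch for VM lifecycle events in the
# virtual infrastructure and mirror them into the ADC's farm membership, so
# the customer never has to edit the ADC directly. Names are illustrative.

class FarmSync:
    def __init__(self, farm_of):
        self.farm_of = farm_of          # maps an application name to an ADC farm
        self.farms = {farm: set() for farm in farm_of.values()}

    def on_vm_event(self, event, app, vm):
        """Handle one virtual-infrastructure event and update farm membership."""
        farm = self.farm_of[app]
        if event == "vm_created":
            self.farms[farm].add(vm)        # add the new real server to the farm
        elif event == "vm_deleted":
            self.farms[farm].discard(vm)    # remove it when the VM goes away

sync = FarmSync({"Application 1": "Farm-1", "Application 2": "Farm-2"})
sync.on_vm_event("vm_created", "Application 1", "10.0.0.11")
sync.on_vm_event("vm_created", "Application 1", "10.0.0.12")
sync.on_vm_event("vm_deleted", "Application 1", "10.0.0.11")
```

After these three events, Farm-1 contains only the surviving VM, without any manual ADC change by the customer — which is the operational benefit the slide describes.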
The second part of Radware’s new Cloud Networks solution is our ability to provide application elasticity and complete alignment between the application’s needs and the network’s virtual computing resources. This is done by combining Radware’s ADC and VirtualDirector products. VirtualDirector monitors the performance of the application and network, and if the required service level is not acceptable, it automatically adds additional VMs in order to improve the application’s response time. Run animation and explain example according to the callouts <STEP 4 IS LAST CLICK>. As can be seen through the example, Radware’s solution provides complete elasticity to the data center’s resources. Resources can be added or removed automatically according to the application’s needs. This ensures there are always sufficient resources to accommodate the application’s needs, but also prevents the existence of unused resources.
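The scale-out/scale-in decision at the heart of this elasticity flow can be sketched as one step of a monitoring control loop. This is an illustrative sketch only — the thresholds, the half-SLA scale-in rule, and the `autoscale` function are assumptions, not VirtualDirector's actual logic:

```python
# Illustrative control-loop step for the slide-37 flow: when the measured
# response time breaches the SLA, provision another VM (and register it with
# the ADC); when there is ample headroom, release a VM to avoid waste.
# All thresholds and limits are made-up example values.

def autoscale(response_ms, vms, sla_ms=500, min_vms=1, max_vms=10):
    """Return the new VM count after one monitoring interval."""
    if response_ms > sla_ms and vms < max_vms:
        return vms + 1                  # SLA breach: add a VM, update the ADC
    if response_ms < sla_ms * 0.5 and vms > min_vms:
        return vms - 1                  # ample headroom: release an unused VM
    return vms                          # within bounds: leave capacity as is

print(autoscale(800, 2))   # breach -> scale out to 3
print(autoscale(100, 3))   # headroom -> scale in to 2
print(autoscale(300, 2))   # within bounds -> stays at 2
```

The two goals stated in the note map directly onto the two branches: the first ensures sufficient resources, the second prevents unused ones.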
In the previous example we saw how Radware’s solution ensures complete elasticity within a single data center. In this example, we can see how Radware’s solution can also provide elasticity across multiple data centers, thus accommodating IaaS providers’ disaster recovery needs. Using Radware’s best-of-breed Global Traffic Redirection solution, IaaS providers can now ensure that end-users are always served from the data center closest to them (using our patented Proximity feature). Once an end-user reaches the closest data center, we automatically check his/her application performance, and if needed we will add an additional resource. Run animation and explain example according to the callouts <STEP 6 IS LAST CLICK>. As can be seen through the example, Radware’s solution provides complete elasticity across multiple data centers’ resources. Resources can be added or removed automatically according to the application’s needs, ensuring there are always sufficient resources to accommodate those needs while preventing the existence of unused resources. Business benefits:
Ensure business applications get the resources they need.
Align business application requirements with the infrastructure.
Gain cloud virtual infrastructure elasticity.
Guarantee the best response time for the end-user.
Facilitate virtual infrastructure OPEX reductions by freeing IT resources.
Resource allocation based on application response time.
Global traffic management based on business parameters.
This is achieved by dynamically provisioning computing resources on demand, both within a single data center and across multiple data centers, while taking into consideration the available capacity within each data center.
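The proximity-plus-capacity redirection decision described above can be sketched as a simple selection rule: send the user to the nearest data center that still has spare capacity. This is a hedged illustration of the idea, not Radware's Proximity algorithm — the function, latency figures, and capacity numbers are all invented for the example:

```python
# Sketch of proximity-based global traffic redirection across data centers:
# direct the user to the lowest-latency data center that is not at capacity,
# spilling over to the next-closest site when the nearest one is full.
# All numbers below are made up for illustration.

def pick_data_center(latency_ms, load, capacity):
    """Choose the lowest-latency data center with spare capacity, else None."""
    candidates = [dc for dc in latency_ms if load[dc] < capacity[dc]]
    if not candidates:
        return None                     # every site is maxed out
    return min(candidates, key=lambda dc: latency_ms[dc])

latency = {"DC-1": 20, "DC-2": 90}      # DC-1 is closer to this user
load = {"DC-1": 100, "DC-2": 40}
capacity = {"DC-1": 100, "DC-2": 100}   # DC-1 is full, so traffic spills over
print(pick_data_center(latency, load, capacity))  # DC-2
```

This captures the last sentence of the note: the redirection is proximity-driven, but only while the closest data center has available capacity.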