Simone Brunozzi gave a presentation on implementing disaster recovery strategies with AWS cloud services. He discussed how AWS can be used to back up and restore data, maintain a pilot light architecture in which core systems are replicated in AWS, and implement warm standby or multi-site solutions. Key benefits of AWS for DR include reduced infrastructure costs, the ability to scale resources easily, paying only for what is used, and high security. Common architecture patterns such as backup/restore, pilot light, warm standby, and multi-site were covered along with the relevant AWS services.
Thank You!
Disaster Recovery
with the AWS Cloud
by Simone Brunozzi
Technology Evangelist, APAC
Twitter: @simon
Editor's Notes
On your own
Bringing on a full time consultant
With an ISV solution
With a system integrator
So let’s start with where DR fits into your continuity plans overall. It’s part of a business continuity continuum, and I’d like to point out that implementing DR is not an all-or-nothing proposition: you can work your way across the continuum, and today we’ll discuss some of the things to consider and how AWS can play a part.

The starting point is usually thinking about how to keep your applications up and running. You’ll have a requirement in the form of how many nines of reliability you need, keeping in mind that every nine you add after the first few adds a lot of cost, often around 10x per additional nine.

The next thing you’re likely to plan for is how to back up your data so it’s safe and available to you in the event of a disaster. How do you store your data so it’s durable and available when you need it?

And then you need a plan for what to do in the unlikely event of one of those black swan events where a true disaster occurs. How do you deal with recovery?
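The cost of each additional nine is easier to weigh when you see how little downtime it actually buys back. A minimal sketch in plain Python (illustrative arithmetic only, not an AWS figure):

```python
def downtime_per_year(nines: int) -> float:
    """Allowed downtime, in hours per year, for a given number of nines
    of availability (e.g. 3 nines = 99.9%)."""
    availability = 1 - 10 ** (-nines)
    return (1 - availability) * 365 * 24

print(round(downtime_per_year(3), 2))  # 3 nines: about 8.76 hours/year
print(round(downtime_per_year(4), 2))  # 4 nines: about 0.88 hours/year
```

Each extra nine cuts allowed downtime tenfold, which is why the cost to achieve it tends to rise so steeply.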
Disaster recovery is at one end of that continuum, and how you choose to implement your DR is influenced by a couple of requirements:
How long you’re able to be down; that’s your Recovery Time Objective, or RTO.
How much data you can tolerate losing, or how in sync your backup data has to be with your operating environment; that’s your Recovery Point Objective, or RPO.

The business continuity timeline usually runs parallel with an incident management timeline.

These are not technological questions; they are business considerations. The easy answer is an RTO of minutes and an RPO of no data loss, but that’s likely to be much more expensive than is feasible, and chances are you don’t need to be that stringent. So now you can start to analyze the trade-offs between the cost of achieving various recovery times and data restore points.

And you start to think about the requirements for different types of outages, from restoring a file that was accidentally deleted through to handling a complete system outage due to a natural disaster.

A common path to the cloud is to start with backup and recovery plans that use the cloud for your backups, and then identify the applications that are candidates for a full DR plan in the cloud. Any app that you can run in the cloud is low-hanging fruit; replicating the full stack is at the more complex and involved end of the scale.

So you have a lot of flexibility in how you approach the solution that fits you best, and we are going to talk about what some of those architectures look like and how you can implement them.
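The RPO side of that trade-off can be sketched in a few lines of plain Python: given the timestamp of your newest backup, are you still inside your recovery point window? (The schedule and the 4-hour RPO here are hypothetical values chosen for illustration.)

```python
from datetime import datetime, timedelta

def meets_rpo(last_backup: datetime, now: datetime, rpo: timedelta) -> bool:
    """True if the newest backup is recent enough to satisfy the RPO,
    i.e. a disaster right now would lose at most `rpo` worth of data."""
    return now - last_backup <= rpo

now = datetime(2012, 1, 1, 12, 0)
rpo = timedelta(hours=4)

print(meets_rpo(datetime(2012, 1, 1, 9, 0), now, rpo))  # True: 3h old
print(meets_rpo(datetime(2012, 1, 1, 7, 0), now, rpo))  # False: 5h old
```

The same shape of check applies to RTO, except the clock starts at the moment of the outage rather than at the last backup.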
We’re often asked how it is that some customers are able to reduce costs as dramatically as the claims I made earlier, while still getting the recovery performance they need. That’s a great question, so I’ll take a minute to point out in simple terms one of the ways that can be accomplished.

[talk to the slide]
AWS has eight Regions, and each Region is a separate cloud. This gives our customers complete control over where data is stored, and a lot of options for where to host your disaster recovery site. You are literally a few mouse clicks away from deploying across the globe. This is a lot easier than doing that with off-site tape backup, your own data centers, or CoLos.
Slide notes:
You can choose to deploy and run your applications in multiple physical locations within the AWS cloud. Amazon Web Services are available in geographic Regions. When you use AWS, you can specify the Region in which your data will be stored, instances run, queues started, and databases instantiated. For most AWS infrastructure services, including Amazon EC2, there are eight Regions: US East (Northern Virginia), US West (Northern California), US West (Oregon), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), AWS GovCloud (US), and South America (Sao Paulo).

Within each Region are Availability Zones (AZs). Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low-latency network connectivity to other Availability Zones in the same Region. By launching instances in separate Availability Zones, you can protect your applications from a failure (unlikely as it might be) that affects an entire zone. Regions consist of one or more Availability Zones, are geographically dispersed, and are in separate geographic areas or countries. The Amazon EC2 service level agreement commitment is 99.95% availability for each Amazon EC2 Region.
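The "launch instances in separate Availability Zones" advice boils down to simple round-robin placement, so that a single-AZ failure takes out as few instances as possible. A minimal sketch in plain Python; the instance and AZ names are hypothetical, and real placement would go through the EC2 API rather than a dictionary:

```python
from itertools import cycle

def spread_across_azs(instance_ids, azs):
    """Assign each instance to an AZ round-robin, so a failure of any
    one AZ affects at most ceil(n / len(azs)) instances."""
    az_cycle = cycle(azs)
    return {instance: next(az_cycle) for instance in instance_ids}

placement = spread_across_azs(
    ["web-1", "web-2", "web-3"],
    ["us-east-1a", "us-east-1b"],
)
print(placement)
# {'web-1': 'us-east-1a', 'web-2': 'us-east-1b', 'web-3': 'us-east-1a'}
```

Losing us-east-1b in this layout costs you one instance out of three, not the whole fleet.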
With AWS, you’ll see that the same security isolations are employed as would be found in a traditional data center. These include physical data center security, separation of the network, isolation of the server hardware, and isolation of storage. AWS customers have control over their data: they own the data, not us; they can encrypt their data at rest and in motion, just as they would in their own data center.
Our customers continue to make very heavy use of Amazon S3. We now process up to 500,000 S3 requests per second. Many of these are PUT requests, representing new data that is flowing in to S3. As of the end of the fourth quarter of 2011, there are 762 billion (762,000,000,000) objects in S3.
AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry standard 802.1q VLANs, this dedicated connection can be partitioned into multiple logical connections. This allows you to use the same connection to access public resources such as objects stored in Amazon S3 using public IP address space, and private resources such as Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC) using private IP space, while maintaining network separation between the public and private environments. Logical connections can be reconfigured at any time to meet your changing needs. http://aws.amazon.com/directconnect/

Amazon Virtual Private Cloud (Amazon VPC) lets you provision a private, isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define. With Amazon VPC, you can define a virtual network topology that closely resembles a traditional network that you might operate in your own datacenter. You have control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. You can easily customize the network configuration for your Amazon VPC. For example, you can create a public-facing subnet for your webservers that has access to the Internet, and place your backend systems such as databases or application servers in a private-facing subnet with no Internet access.

You can leverage multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet. Additionally, you can create a Hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate datacenter. http://aws.amazon.com/vpc/

Dedicated Instances are Amazon EC2 instances launched within your Amazon VPC that run on hardware dedicated to a single customer. Dedicated Instances let you take full advantage of the benefits of Amazon VPC and the AWS cloud – on-demand elastic provisioning, pay only for what you use, and a private, isolated virtual network – all while ensuring that your Amazon EC2 compute instances will be isolated at the hardware level. You can easily create a VPC that contains dedicated instances only, providing physical isolation for all Amazon EC2 compute instances launched into that VPC, or you can choose to mix both dedicated instances and non-dedicated instances within the same VPC based on application-specific requirements. http://aws.amazon.com/dedicated-instances/
Advantages of simple Backup and Restore
 Simple to get started
 Extremely cost effective (mostly backup storage)
Preparation phase
 Take backups of current systems
 Store backups in S3
 Document the procedure to restore from backup on AWS
 Know which AMI to use; build your own as needed
 Know how to restore systems from backups
 Know how to switch to the new system
 Know how to configure the deployment
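One small but important part of that preparation is being able to find the newest backup quickly during a restore. A plain-Python sketch of a timestamped S3 key layout (the naming convention here is illustrative, not an AWS requirement; the actual upload would use an S3 client):

```python
from datetime import datetime

def backup_key(system: str, taken_at: datetime) -> str:
    """Illustrative S3 object key for a backup: zero-padded timestamps
    make keys sort lexicographically in time order."""
    return f"backups/{system}/{taken_at:%Y/%m/%d/%H%M%S}.tar.gz"

def latest_backup(keys):
    """Because of the zero-padded layout, the newest backup for a
    system is simply the lexicographic maximum of its keys."""
    return max(keys)

k1 = backup_key("db", datetime(2012, 1, 1, 3, 0))
k2 = backup_key("db", datetime(2012, 1, 2, 3, 0))
print(k2)                          # backups/db/2012/01/02/030000.tar.gz
print(latest_backup([k1, k2]) == k2)  # True
```

Rehearsing the restore against keys like these, before a disaster, is exactly the "know how to restore" step in the list above.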