Implement Affordable Disaster Recovery with Hyper-V and Multi-Site Clustering
1. Implementing Affordable Disaster Recovery with Hyper-V and Multi-Site Clustering. Greg Shields, MVP, Partner and Principal Technologist, www.ConcentratedTech.com
9. DISASTER: A naturally occurring event, such as a tornado, flood, or hurricane, impacts your datacenter and causes damage that halts all processing in that datacenter.
10. DISASTER: A widespread incident, such as a water leak or long-term power outage, interrupts the functionality of your datacenter for an extended period of time.
11. JUST A BAD DAY: A problem with a virtual host creates a “blue screen of death,” immediately halting all processing on that server.
12. JUST A BAD DAY: An administrator installs a piece of code that breaks a service, shutting it down and preventing some action from occurring on the server.
13. JUST A BAD DAY: An issue with power connections causes a server, or an entire rack of servers, to inadvertently and rapidly power down.
15. What Makes a Disaster? Your decision to “declare a disaster” and move to “disaster ops” is a major one. The technologies used for disaster protection are different from those used for high availability: more complex and more expensive. Failover and failback processes also involve more thought; you might not be able to just “fail back” with the click of a button.
16. A Disastrous Poll: Where Are We? Who here is planning a DR environment? In the process of implementing one? Already enjoying one? And what is a “DR environment,” anyway?
18. Multi-Site Hyper-V == Single-Site Hyper-V. DON’T PANIC: multi-site Hyper-V looks very much the same as single-site Hyper-V. Microsoft has not done a good job of explaining this fact! Some Hyper-V hosts, some networking and storage, and virtual machines that Live Migrate around. But there are some major differences too: VMs can Live Migrate across sites. Sites typically have different subnet arrangements. Data in the primary site must be replicated to the DR site. And clients need to know where your servers go!
19. Constructing Site-Proof Hyper-V: Three Things You Need. At a very high level, Hyper-V disaster recovery is three things: a storage mechanism, a replication mechanism, and a set of target servers and a cluster to receive virtual machines and their data. Once you have these three things, layering Hyper-V on top is easy.
22. Thing 1: A Storage Mechanism. Typically, two SANs in two different locations: Fibre Channel, iSCSI, FCoE, heck, even JBOD. Often a similar model or manufacturer; this similarity can be necessary (although not required) for some replication mechanisms to function properly. The backup SAN doesn’t necessarily need to be the same size or speed as the primary SAN: replicated data isn’t always the full set of data, and you may not need disaster recovery for everything. DR environments: where old SANs go to die.
24. Thing 2: A Replication Mechanism. Replication between SANs must occur, and there are two commonly accepted ways to accomplish it. Synchronously: changes are made on one node at a time, and subsequent changes on the primary SAN must wait for an ACK from the backup SAN. Asynchronously: changes on the backup SAN will eventually be written; changes are queued at the primary SAN and transferred at intervals.
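The difference between the two modes can be sketched in a few lines of code. This is a toy model only: the `SyncPrimary`/`AsyncPrimary`/`BackupSAN` names are invented for illustration and do not correspond to any vendor API.

```python
# Toy model of synchronous vs. asynchronous SAN replication.

class BackupSAN:
    def __init__(self):
        self.blocks = {}

    def write(self, key, value):
        self.blocks[key] = value
        return "ACK"                       # backup acknowledges the write


class SyncPrimary:
    """Synchronous: each write blocks until the backup ACKs it."""
    def __init__(self, backup):
        self.blocks, self.backup = {}, backup

    def write(self, key, value):
        self.blocks[key] = value
        assert self.backup.write(key, value) == "ACK"   # wait for the ACK


class AsyncPrimary:
    """Asynchronous: writes queue locally and ship at intervals."""
    def __init__(self, backup):
        self.blocks, self.backup, self.queue = {}, backup, []

    def write(self, key, value):
        self.blocks[key] = value
        self.queue.append((key, value))    # returns immediately, no ACK

    def flush(self):
        """Runs on the replication interval, draining the queue."""
        while self.queue:
            self.backup.write(*self.queue.pop(0))
```

With the synchronous pair, the backup is current after every write; with the asynchronous pair, the backup lags until the next `flush()`, which is exactly the window of data you can lose in a disaster.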
39. Thing 2½: Replication Processing Location. There are also two locations for replication processing.
Storage layer: replication processing is handled by the SAN itself, with agents often installed on virtual hosts or machines to ensure crash consistency. Easier to set up, fewer moving parts, and more scalable; but there are concerns about crash consistency.
OS / application layer: replication processing is handled by software in the VM OS, and that software also operates as the agent. More challenging to set up, more moving parts, and more installations to manage and monitor; scalability and cost are linear, but there are fewer concerns about crash consistency.
40. Thing 3: Target Servers and a Cluster. Finally, you need target servers and a cluster in the backup site.
44. Clustering’s Sordid History.
Windows NT 4.0: Microsoft Cluster Service, “Wolfpack.” “As the corporate expert in Windows clustering, I recommend you don’t use Windows clustering.”
Windows 2000: greater availability and scalability; still painful.
Windows 2003: added iSCSI storage alongside traditional Fibre Channel; SCSI Resets still used as a method of last resort (painful).
Windows 2008: eliminated SCSI Resets; eliminated the full-solution HCL requirement; added the Cluster Validation Wizard and pre-cluster tests; clusters can now span subnets (ta-da!).
Windows 2008 R2: improvements to the Cluster Validation Wizard and Migration Wizard; additional cluster services; Cluster Shared Volumes (!) and Live Migration (!).
50. Quorum: Windows Clustering’s Most Confusing Configuration. Ever been to a Kiwanis meeting? A cluster “exists” because it has quorum among its members, and that quorum is achieved through a voting process. Different Kiwanis clubs have different rules for quorum; different clusters have different rules for quorum. If a cluster “loses quorum,” the entire cluster shuts down and ceases to exist until quorum is regained. This is very different from a resource failover, which is the reason clusters are implemented in the first place. Multiple quorum models exist.
51. Four Options for Quorum: Node and Disk Majority; Node Majority; Node and File Share Majority; No Majority: Disk Only.
55. Quorum in Multi-Site Clusters. Microsoft recommends the Node and File Share Majority model for multi-site clusters. This model provides the best protection against a full-site outage, but that protection requires a file share witness in a third geographic location.
56. Quorum in Multi-Site Clusters. Use Node and File Share Majority: it prevents an entire-site outage from impacting quorum and enables the creation of multiple clusters if necessary. Place the witness server in a third site.
57. I Need a Third Site? Seriously? Here’s where Microsoft’s ridiculous quorum notion gets unnecessarily complicated. What happens if you put the quorum’s file share in the primary site? The secondary site might not automatically come online after a primary-site failure, because votes in the secondary site < votes in the primary site. Let’s count on our fingers…
58. I Need a Third Site? Seriously? What happens if you put the quorum’s file share in the secondary site? A failure in the secondary site could cause the primary site to go down, because votes in the secondary site > votes in the primary site. More fingers… This problem gets even weirder as time passes and the number of servers in each site changes.
59. I Need a Third Site? Seriously? Third Site for Witness Server
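The finger-counting above can be written out as simple vote arithmetic. The node counts and witness placements below are illustrative assumptions, not figures from the deck: two nodes per site plus one file share witness, for five votes total.

```python
# Minimal sketch of majority-quorum vote counting for a multi-site cluster.
# Node counts and witness placement are illustrative assumptions.

def has_quorum(votes_up, total_votes):
    """A cluster keeps quorum while more than half of all votes are up."""
    return votes_up > total_votes // 2

# Two nodes in each site, plus one file share witness: five votes total.
total = 2 + 2 + 1

# Witness in the primary site, then the primary site fails:
# only the secondary site's two votes survive -- no quorum, so the
# secondary site cannot automatically come online.
assert not has_quorum(2, total)

# Witness in a third site, then the primary site fails:
# the secondary site's two votes plus the witness survive -- quorum holds,
# and the secondary site keeps running.
assert has_quorum(2 + 1, total)
```

This is exactly why the witness belongs in a third location: it breaks the tie without living in either site that might fail.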
61. Multi-Site Cluster Tips and Tricks. Install servers so that your primary site always contains more servers than your backup sites; this eliminates some quorum problems during a site outage.
64. Multi-Site Cluster Tips and Tricks. Manage the Preferred Owners and Persistent Mode options: make sure your VMs fail over to servers in the same site first, but also make sure they have the option of failing over elsewhere. Consider carefully the effects of Failback. Failback is a great tool for resetting after a failure, but it can be a massive problem-causer as well, and its effects are particularly pronounced in multi-site clusters. Recommendation: turn it off until you’re ready.
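The “same site first, but with options elsewhere” ordering can be sketched as a small function. The host names and site map here are invented for illustration; in practice this ordering is what you express through each resource’s Preferred Owners list.

```python
# Hypothetical sketch of same-site-first failover ordering.
# Host names and the site map are invented for illustration.

SITE = {"hv1": "primary", "hv2": "primary", "hv3": "backup", "hv4": "backup"}

def failover_order(failed_host, candidates):
    """Prefer surviving hosts in the failed host's own site, then others."""
    same = [h for h in candidates if SITE[h] == SITE[failed_host]]
    other = [h for h in candidates if SITE[h] != SITE[failed_host]]
    return same + other

# hv1 dies; hv2 (same site) should be tried before the backup-site hosts,
# which remain available as a fallback.
order = failover_order("hv1", ["hv2", "hv3", "hv4"])
```

Note the deliberate design: the other-site hosts stay in the list rather than being excluded, so a full primary-site loss still has somewhere to go.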
68. Multi-Site Cluster Tips and Tricks. Resist creating clusters that support other services: a Hyper-V cluster is a Hyper-V cluster is a Hyper-V cluster. Use disk “dependencies” as affinity/anti-affinity rules: Hyper-V by itself doesn’t have an elegant way to affinitize, and setting disk dependencies against each other is a workaround. Add servers in pairs: this ensures that a server loss won’t cause a site split brain, though it is less of a problem with the File Share Witness configuration.
71. Most Important! Ensure that networking remains available when VMs migrate from the primary site to the backup site. Clustering can span subnets! This is good, but only if you plan for it. Remember that crossing subnets also means changing the IP address, subnet mask, gateway, and so on at the new site; this can happen automatically via DHCP and dynamic DNS, or it must be updated manually. DNS replication is also a problem: clients will require time to update their local cache. Consider reducing the DNS TTL or clearing client caches.
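The client-cache problem is easy to see in a toy model. The record name, addresses, and TTL values below are illustrative only; the point is that after the VM moves, clients keep serving the old address until their cached record expires.

```python
# Toy model of client-side DNS caching after a cross-site migration.
# Record name, addresses, and TTL values are illustrative only.

class CachingClient:
    def __init__(self, ttl):
        self.ttl = ttl
        self.cache = {}                     # name -> (address, expiry time)

    def resolve(self, name, dns, now):
        cached = self.cache.get(name)
        if cached and now < cached[1]:
            return cached[0]                # still serving the cached answer
        addr = dns[name]                    # cache miss or expired: ask DNS
        self.cache[name] = (addr, now + self.ttl)
        return addr

dns = {"vm1.corp.example": "10.1.0.5"}      # record points at primary site
client = CachingClient(ttl=300)             # 300-second TTL
client.resolve("vm1.corp.example", dns, now=0)

# VM Live Migrates to the backup site; dynamic DNS updates the record.
dns["vm1.corp.example"] = "10.2.0.5"

# Until the TTL expires, the client still gets the stale primary address.
stale = client.resolve("vm1.corp.example", dns, now=100)
fresh = client.resolve("vm1.corp.example", dns, now=400)
```

Shrinking the TTL before a planned migration shrinks that stale window, which is exactly the “reduce the DNS TTL” advice above.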