Disaster Recovery with Hyper-V Replication in Windows Server 2012
1. Microsoft Windows Server 2012
Seminar: Disaster Recovery with Hyper-V
replication in Windows Server 2012
Are you looking for a Disaster Recovery (DR) solution without the
high costs that come with SAN replication? Then Hyper-V Replica is
for you. This new feature is aimed at small and medium-sized
businesses. It is an asynchronous, storage-log-based replication
technique in which all changes to a virtual machine are replicated
every 5 minutes to the Disaster Recovery site. This gives you an
affordable Disaster Recovery solution in which at most 5 minutes of
data is lost. You can also use this technique when you have no
Disaster Recovery site of your own, because the replica can also go
to the public cloud!
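The replication described above is driven by the Hyper-V PowerShell module that the deck references later. As a minimal sketch (the VM name "SRV-VM01" and host name "replica.contoso.com" are placeholders, not from the original slides), enabling replication for a single virtual machine over Kerberos looks roughly like this:

```powershell
# Run on the primary Hyper-V host. Names below are placeholders.
Enable-VMReplication -VMName "SRV-VM01" `
    -ReplicaServerName "replica.contoso.com" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos

# Send the initial copy over the network; changed blocks then
# follow every 5 minutes.
Start-VMInitialReplication -VMName "SRV-VM01"
```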
3. Windows Server 2012
Trends and Challenges
Hyper-V Replica
Get Started: Advice and Action!
7. Trends: new devices, apps proliferation, data explosion, cloud computing
10. Manage virtual machines independently from the underlying infrastructure; handle changing needs on demand
• Live migration within a cluster
• Live migration of storage
• Shared-nothing live migration
• Hyper-V Replica
11. Hyper-V Replica
[Diagram: a primary site and a Replica site connected by a WAN link. The primary site runs Exchange, CRM, IIS, SQL, and SharePoint virtual machines on an SMB file share; the Replica site holds Exchange and CRM replica virtual machines on a SAN. Both sites run the Hyper-V role and tools: the Hyper-V integrated UI and Hyper-V PowerShell cmdlets. The Hyper-V Management Module on the primary site tracks and replicates changes for each virtual machine; on the Replica site it receives and applies the changes to the replica virtual machine. Replica traffic (P1, P2 on the primary side; R1, R2, R3 on the Replica side) is sent and received over the WAN link.]
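Before the primary site can send changes, the Replica site must be configured to accept them. A minimal sketch using the Hyper-V module (the storage path is a placeholder):

```powershell
# Run on the Replica host: accept Kerberos-authenticated replication
# from any primary server and store replicas under a placeholder path.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replicas"
```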
12. Live migration improvements
[Diagram: live migration of a VM between two hosts over an IP connection, with the virtual machine stored on a server message block (SMB) share. Stages: live migration setup (configuration data transferred), memory content transferred, modified memory pages transferred, storage handle moved.]
Improvements:
• Faster and simultaneous migration
• Live migration outside a clustered environment
• Store virtual machines on an SMB file share (SMB network storage)
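The migration steps above can also be driven from PowerShell rather than Hyper-V Manager. A hedged sketch (VM and host names are placeholders):

```powershell
# Allow this host to take part in live migrations, authenticated
# with Kerberos.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# Live migrate a VM whose files live on an SMB share to another host.
Move-VM -Name "SRV-VM01" -DestinationHost "HV-HOST2"
```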
13. Live migration of storage
Move virtual hard disks attached to a running virtual machine.
[Diagram: a virtual machine running on a Hyper-V host, with a source device and a target device. First, reads and writes go to the source VHD while the disk contents are copied to the new destination VHD; then disk writes are mirrored to both VHDs while outstanding changes are replicated; finally, reads and writes go to the new VHD.]
Benefits:
• Manage storage in a cloud environment with greater flexibility and control
• Move storage with no downtime
• Update physical storage available to a virtual machine (such as SMB-based storage)
• Windows PowerShell cmdlets
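The storage move described on this slide maps onto a single cmdlet. A sketch, assuming a placeholder VM name and SMB share path:

```powershell
# Move the running VM's virtual hard disks and configuration files
# to an SMB share, with no downtime.
Move-VMStorage -VMName "SRV-VM01" `
    -DestinationStoragePath "\\fileserver\vms\SRV-VM01"
```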
14. Shared-nothing live migration
[Diagram: source and destination Hyper-V hosts connected only by an IP connection, each with its own storage device. As live migration begins, reads and writes go to the source VHD while the disk contents are copied to the destination VHD. As it continues, writes are mirrored to both VHDs while outstanding changes are replicated. The live migration then completes by transferring configuration data, memory content, and modified memory pages, after which the virtual machine runs on the destination.]
Benefits:
• Increase flexibility of virtual machine placement
• Increase administrator efficiency
• Reduce downtime for migrations across cluster boundaries
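Shared-nothing live migration combines the storage and memory moves into one operation. A hedged sketch with placeholder names and paths:

```powershell
# Migrate a VM and its storage to a host that shares nothing with
# the source; only an IP connection between the hosts is required.
Move-VM -Name "SRV-VM01" -DestinationHost "HV-HOST2" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\SRV-VM01"
```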
15. Scale and performance
• Run more demanding applications with better performance
• Take advantage of newer hardware, while still using existing hardware to maximum advantage
Features: bigger, faster virtual machines; hardware offloading; guest applications able to take advantage of improved Non-Uniform Memory Access (NUMA) support
16. Maximum numbers

System     Resource                                  Windows Server   Windows Server   Improvement
                                                     2008 R2          2012             factor
Host       Logical processors on hardware            64               320              5
           Physical memory                           1 TB             4 TB             4
           Virtual processors per host               512              2,048            4
Virtual    Virtual processors per virtual machine    4                64               16
machine    Memory per virtual machine                64 GB            1 TB             16
           Active virtual machines                   384              1,024            2.7
Cluster    Nodes                                     16               64               4
           Virtual machines                          1,000            4,000            4
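To illustrate the new per-VM maximums in the table, a sketch (the VM name is a placeholder, and the VM must be powered off to change its processor count):

```powershell
# Scale a VM up to the Windows Server 2012 maximums:
# 64 virtual processors and 1 TB of memory.
Set-VMProcessor -VMName "SQL-VM01" -Count 64
Set-VMMemory -VMName "SQL-VM01" -StartupBytes 1TB
```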
17. Improvements for Hyper-V Dynamic Memory
Dynamic Memory:
• Introduced in Windows Server 2008 R2 SP1
• Reallocates memory automatically among running virtual machines
Windows Server 2012 Hyper-V improvements:
• Minimum memory
• Hyper-V smart paging
• Memory ballooning
• Runtime configuration
[Diagram: VM1 drawing from the physical memory pool, showing minimum memory, memory in use, and maximum memory levels; the administrator can increase maximum memory without a VM restart.]
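The Dynamic Memory settings above are all exposed through one cmdlet. A sketch with placeholder sizes:

```powershell
# Enable Dynamic Memory with a minimum below the startup value
# (minimum memory is new in Windows Server 2012); the maximum can be
# raised while the VM is running.
Set-VMMemory -VMName "SRV-VM01" -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 8GB
```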
18. Improvements for Hyper-V Dynamic Memory (continued)
Benefits:
• Higher consolidation numbers
• Improved reliability of Hyper-V operations
• Ability to increase maximum memory configuration with minimal downtime
[Diagram: VM1, VM2, … VMn drawing from the physical memory pool, each with minimum memory, maximum memory, and memory in use. When a virtual machine starts, startup temporarily increases memory in use above the level needed after startup; a Hyper-V paging file provides the additional memory for startup (Hyper-V smart paging), the memory is reclaimed after startup, and the paged memory is removed with a virtual machine restart.]
19. Non-Uniform Memory Access (NUMA)
• Projects NUMA topology onto a virtual machine
• Allows guest operating systems and applications to make intelligent NUMA decisions
• Aligns guest NUMA nodes with host resources
Guest NUMA topology by default matches host NUMA topology.
[Diagram: vNUMA nodes A and B of two virtual machines aligned with host NUMA nodes 1 through 4.]
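Whether a VM's memory may be spread across physical NUMA nodes is controlled at the host level. A hedged sketch of that setting:

```powershell
# Keep each VM's memory within a single physical NUMA node, so the
# NUMA topology projected into the guest stays aligned with host
# resources.
Set-VMHost -NumaSpanningEnabled $false
```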
20. NIC Teaming
• Multiple modes: switch dependent and independent
• Hashing modes: port and 4-tuple
• Active/active and active/standby
[Diagram: virtual adapters bound to team network adapters.]
21. Hyper-V Network Virtualization
Example:

Tenant        Customer Address   Provider Address
Blue Corp     10.1.1.1           192.168.1.10
              10.1.1.2           192.168.1.12
Yellow Corp   10.1.1.1           192.168.1.11
              10.1.1.2           192.168.1.13

[Diagram: policy settings map each tenant's customer address space (10.1.1.1, 10.1.1.2) onto the provider datacenter network (192.168.1.10 through 192.168.1.13).]

How IP address rewrite works: maps each Customer Address (CA) to a unique Provider Address (PA), and sends information in regular TCP/IP packets on the wire.
Benefits: requires no upgrade of network adapters, switches, or network appliances; can be deployed today without sacrificing performance.
22. Virtual machine import
• User selects the virtual machine to import/register
• User selects remote registration or in-place registration
• For remote registration, copies of the configuration file and saved state are copied to the destination
• Validation occurs on the new host
• If required, the "fix it" wizard is used for repair operations
• Virtual machine is ready to start up
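The flow above corresponds to the Import-VM cmdlet. A sketch; the configuration file path and GUID are placeholders:

```powershell
# In-place registration: use the VM's files where they already are.
Import-VM -Path "D:\VMs\SRV-VM01\Virtual Machines\<vm-guid>.xml" -Register

# Alternatively, -Copy (optionally with -GenerateNewId) performs an
# import that copies the files to the destination first.
```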
23. VHDX
Features:
• Storage capacity up to 64 TB
• Corruption protection during power failures
• Optimal structure alignment for large-sector disks
Benefits:
• Increases storage capacity
• Protects data
• Helps to ensure quality performance on large-sector disks
[Diagram of the VHDX on-disk layout: a header region (header and metadata table); a data region with large, 1 MB-aligned allocations holding the intent log, the Block Allocation Table (BAT), user data blocks, and sector bitmap blocks; and a metadata region with small, unaligned allocations holding user metadata and file metadata.]
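Creating a VHDX larger than the old 2 TB VHD limit takes one cmdlet. A sketch with a placeholder path and size:

```powershell
# Create a 10 TB dynamically expanding VHDX
# (the format supports up to 64 TB).
New-VHD -Path "D:\VMs\data.vhdx" -SizeBytes 10TB -Dynamic
```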
24. Virtual Fibre Channel: access Fibre Channel SAN data from a virtual machine
• Unmediated access to a storage area network (SAN)
• Hardware-based I/O path to the virtual hard disk stack
• N_Port ID Virtualization (NPIV) support
• Single Hyper-V host connected to different SANs
• Up to four Virtual Fibre Channel adapters on a virtual machine
• Multipath I/O (MPIO) functionality
• Live migration maintaining Fibre Channel connectivity
[Diagram: two Hyper-V hosts, each carrying Worldwide Name Sets A and B, so that connectivity to the Fibre Channel SAN is maintained during live migration.]
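A virtual Fibre Channel adapter is attached by defining a virtual SAN on the host and then adding an HBA to the VM. A hedged sketch; the SAN name, VM name, and WWN values are placeholders:

```powershell
# Define a virtual SAN backed by the host's physical HBA ports
# (the WWNN/WWPN values below are placeholders).
New-VMSan -Name "ProductionSAN" `
    -WorldWideNodeName "C003FF0000FFFF00" `
    -WorldWidePortName "C003FF5778E50002"

# Give the VM one of its up-to-four virtual Fibre Channel adapters.
Add-VMFibreChannelHba -VMName "SQL-VM01" -SanName "ProductionSAN"
```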
27. MCSA: Windows Server 2012
Installing and Configuring Windows Server 2012 + Administering Windows Server 2012 + Configuring Advanced Windows Server 2012 Services = MCSA: Windows Server 2012
Find a Learning Partner
28. MCSE: Server Infrastructure (* requires recertification)
Windows Server 2012 (MCSA) + Designing and Implementing a Server Infrastructure + Implementing an Advanced Server Infrastructure = MCSE: Server Infrastructure
Find a Learning Partner
29. MCSE: Desktop Infrastructure (* requires recertification)
Windows Server 2012 (MCSA) + Implementing a Desktop Infrastructure + Implementing Desktop Application Environments = MCSE: Desktop Infrastructure
Find a Learning Partner
30. Upgrade paths
Any of the following certifications qualify:
• MCSA: Windows Server 2008*
• MCITP: Virtualization Administrator
• MCITP: Enterprise Messaging Administrator
• MCITP: Lync Server Administrator
• MCITP: SharePoint Administrator
• MCITP: Enterprise Desktop Administrator
Take "Upgrading Your Skills to MCSA Windows Server 2012" to earn the MCSA, then take either both server exams (Designing and Implementing a Server Infrastructure, Implementing an Advanced Server Infrastructure) or both desktop exams (Implementing a Desktop Infrastructure, Implementing Desktop Application Environments).
31. Step 1: Primary Server
Root key for the primary server:
makecert -pe -n "CN=PrimaryRootCA" -ss root -sr
LocalMachine -sky signature -r "PrimaryRootCA.cer"
32. Step 2: Primary Server
Private key for the primary server:
makecert -pe -n "CN=servername.domain.com" -ss my -sr
LocalMachine -sky exchange -eku
1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2 -in "PrimaryRootCA" -is
root -ir LocalMachine -sp "Microsoft RSA SChannel
Cryptographic Provider" -sy 12 PrimaryCert.cer
33. Step 3: Replica Server
Root key for the replica server:
makecert -pe -n "CN=ReplicaRootCA" -ss root -sr
LocalMachine -sky signature -r "ReplicaRootCA.cer"
34. Step 4: Replica Server
Private key for the replica server:
makecert -pe -n "CN=servername.domain.com" -ss my -sr
LocalMachine -sky exchange -eku
1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2 -in "ReplicaRootCA" -is
root -ir LocalMachine -sp "Microsoft RSA SChannel
Cryptographic Provider" -sy 12 ReplicaCert.cer
35. Step 5: Import
Import the root certificates on both servers, and the private
certificates only on the server they belong to.
Disable revocation checking by means of this registry key:
reg add "HKLM\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Virtualization\FailoverReplication" /v
DisableCertRevocationCheck /d 1 /t REG_DWORD /f
The makecert tool is included in the Windows SDK, which you can
download from Microsoft Download.
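With the certificates from steps 1 through 4 imported, certificate-based (HTTPS) replication can then be enabled. A sketch where the VM name is a placeholder and the thumbprint stands in for the primary certificate's actual thumbprint:

```powershell
# Run on the primary server; port 443 with certificate
# authentication gives an encrypted replication channel.
Enable-VMReplication -VMName "SRV-VM01" `
    -ReplicaServerName "servername.domain.com" `
    -ReplicaServerPort 443 `
    -AuthenticationType Certificate `
    -CertificateThumbprint "<thumbprint>"
```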
Editor's notes
Windows Server 2012 brings Microsoft's experience from building and operating public clouds to deliver a highly dynamic, available, and cost-effective server platform for your private cloud. It offers businesses and hosting providers a scalable, dynamic, and multitenant-aware cloud infrastructure that securely connects across premises and allows IT to respond to business needs faster and more efficiently. Microsoft's Cloud OS uniquely delivers on customer needs across these scenarios. The Cloud OS is a consistent platform with a common set of technologies you can use to develop and manage applications for all environments using the same skills, knowledge, and experience:
Agile development platform: Use the tools you know to build the apps you need, both new modern apps and traditional apps, wherever they need to run to reach your customers or users. Those tools may be Visual Studio and .NET, or open source technologies and languages such as REST, JSON, PHP, and Java.
Unified dev-ops and management: Use System Center as a single pane of glass for all apps, coupled with Visual Studio as a common platform to build once and deploy anywhere, with integration to manage apps across their lifecycles for quick time to solution and easy troubleshooting and management.
Common identity: Implement Active Directory as a powerful asset across environments to help you extend your enterprise to the cloud with internet-scale security using a single identity, and/or securely extend apps and data to devices.
Integrated virtualization: Microsoft is engineered for cloud from the metal up, with virtualization built in as an integrated element of the OS, not layered on top of it, with no need for additional add-ons.
Complete data platform: Microsoft delivers comprehensive technologies to manage petabytes of data in the cloud, millions of transactions for your most mission-critical applications, and billions of rows in the hands of end users for predictive and ad hoc analytics in IT-managed offerings.
Microsoft uniquely delivers the Cloud OS as a consistent and comprehensive set of capabilities across on-premises, Microsoft Cloud or service provider’s cloud to support the world’s apps and data anywhere.
Cloud and mobility are two major trends that have started to affect the IT landscape in general, and the datacenter in particular. There are four key IT questions that customers claim are keeping them up at night:
How do I embrace the cloud? With a private cloud, you get many of the benefits of public cloud computing, including self-service, scalability, and elasticity, with the additional control and customization available from dedicated resources. Microsoft customers can build a private cloud today with Windows Server 2008 R2, Microsoft Hyper-V, and Microsoft System Center, but there are many questions about how to best scale and secure workloads on private clouds and how to cost-effectively build private clouds, offer cloud services, and connect more securely to cloud services.
How do I increase the efficiency in my datacenter? Whether you are building your own private cloud, are in the business of offering cloud services, or simply want to improve the operations of your traditional datacenter, lowering infrastructure costs and operating expenses while increasing overall availability of your production systems is critical. Microsoft understands that efficiency built into your server platform and good management of your cloud and datacenter infrastructure are important to achieving operational excellence.
How do I deliver next-generation applications? As the interest in cloud computing and providing web-based IT services grows, our customers tell us that they need a scalable web platform and the ability to build, deploy, and support cloud applications that can run on-premises or in the cloud. They also want to be able to use a broad range of tools and frameworks for their next-generation applications, including open source tools.
How do I enable modern work styles? As the lines between people's lives and their work blur, their personalities and individual work styles have an increasing impact on how they get their work done, and on which technologies they prefer to use.
As a result, people increasingly want a say in what technologies they use to complete work. This trend is called "Consumerization of IT." As an example of consumerization, more and more people are bringing and using their own PCs, slates, and phones at work. Consumerization is great, as it unleashes people's productivity, passion, innovation, and competitive advantage. We at Microsoft believe that there is power in saying "yes" to people and their technology requests in a responsible way. Our goal at Microsoft is to partner with you in IT, to help you embrace these trends while ensuring that the environment is more secure and better managed.
NOTE: This slide is animated and has 3 clicks.
In this scenario we will discuss how you can achieve increased business flexibility with virtual machine mobility.
[Click] What we are going to talk about here are the different ways of moving a virtual machine around between different servers, and the things that we have done with Windows Server 2012 Hyper-V that give our customers the benefit of being able to manage virtual machines independently of their underlying physical infrastructure.
[Click] Also, you need to be able to handle changes in demand as they occur. You need to rebalance where virtual machines are located, either across the servers the VMs reside on or across the storage resources used by the virtual machine.
[Click] Within Windows Server 2012 we provide these values through:
Live migration within a cluster
Live migration of storage
Shared-nothing live migration
Hyper-V Replica
Current situation: Business continuity is the ability to quickly recover business functions from a downtime event with minimal or no data loss. There are a number of reasons why businesses experience outages, including power failure, IT hardware failure, network outage, human error, IT software failures, and natural disasters. Depending on the type of outage, customers need a high availability solution that simply restores the service. However, some outages that impact the entire data center, such as a natural disaster or an extended power outage, require a disaster recovery solution that restores data at a remote site in addition to bringing up the services and connectivity. Organizations need an affordable and reliable business continuity solution that helps them recover from a failure.
Before Windows Server 2012: Beginning with Windows Server 2008 R2, Hyper-V and Failover Clustering can be used together to make a virtual machine highly available and minimize disruptions. Administrators can seamlessly migrate their virtual machines to a different host in the cluster in the event of an outage, or to load balance their virtual machines without impacting virtualized applications. While this can protect virtualized workloads from a local host failure or scheduled maintenance of a host in a cluster, it does not protect businesses from an outage of an entire data center. While Failover Clustering can be used with hardware-based SAN replication across data centers, such solutions are typically expensive. Hyper-V Replica fills an important gap in the Windows Server Hyper-V offering by providing an affordable in-box disaster recovery solution.
Windows Server 2012 Hyper-V Replica: Windows Server 2012 introduces Hyper-V Replica, a built-in feature that provides asynchronous replication of virtual machines for the purposes of business continuity and disaster recovery.
In the event of failures (such as power failure, fire, or natural disaster) at the primary site, the administrator can manually fail over the production virtual machines to the Hyper-V server at the recovery site. During failover, the virtual machines are brought back to a consistent point in time, and within minutes they can be accessed by the rest of the network with minimal impact to the business. Once the primary site comes back, the administrators can manually revert the virtual machines to the Hyper-V server at the primary site.
Hyper-V Replica is a new feature in Windows Server 2012. It lets you replicate your Hyper-V virtual machines over a network link from one Hyper-V host at a primary site to another Hyper-V host at a Replica site, without reliance on storage arrays or other software replication technologies. The figure shows secure replication of virtual machines from different systems and clusters to a remote site over a WAN.
Benefits of Hyper-V Replica: Hyper-V Replica fills an important gap in the Windows Server Hyper-V offering by providing an affordable in-box business continuity and disaster recovery solution.
Failure recovery in minutes. In the event of an unplanned shutdown, Hyper-V Replica can restore your system in just minutes.
More secure replication across the network. Hyper-V Replica tracks the write operations on the primary virtual machine and replicates these changes to the Replica server efficiently over a WAN. The network connection between the two servers uses the HTTP or HTTPS protocol and supports both integrated and certificate-based authentication. Connections configured to use integrated authentication are not encrypted; for an encrypted connection, you should choose certificate-based authentication.
Hyper-V Replica is closely integrated with Windows failover clustering and provides easier replication across different migration scenarios in the primary and Replica servers.
Hyper-V Replica doesn't rely on storage arrays.
Hyper-V Replica doesn't rely on other software replication technologies.
Hyper-V Replica automatically handles live migration.
Configuration and management are simpler with Hyper-V Replica:
Integrated user interface (UI) with Hyper-V Manager.
Failover Cluster Manager snap-in for Microsoft Management Console (MMC).
Extensible WMI interface.
Windows PowerShell command-line interface scripting capability.
Requirements: To use Hyper-V Replica, you need two physical computers configured with:
Windows Server 2012.
Hyper-V server role.
Hardware that supports the Hyper-V role.
Sufficient storage to host the files that virtualized workloads use. Additional storage on the Replica server, based on the replication configuration settings, may be necessary.
Sufficient network bandwidth among the locations that host the primary and Replica servers and sites.
Firewall rules to permit replication between the primary and Replica servers and sites.
Failover Clustering feature, if you want to use Hyper-V Replica on a clustered virtual machine.
NOTE: This slide is animated and has 5 clicks.
To maintain optimal use of physical resources and to add new virtual machines easily, you must be able to move virtual machines whenever necessary, without disrupting your business. Windows Server 2008 R2 introduced live migration, which made it possible to move a running virtual machine from one physical computer to another with no downtime and no service interruption. However, this assumed that the virtual hard disk for the virtual machine remained consistent on a shared storage device such as a Fibre Channel or iSCSI SAN. In Windows Server 2012, live migrations are no longer limited to a cluster, and virtual machines can be migrated across cluster boundaries, including to any Hyper-V host server in your environment. Hyper-V builds on this feature, adding support for simultaneous live migrations, enabling you to move several virtual machines at the same time. When combined with features such as Network Virtualization, this feature even allows virtual machines to be moved between local and cloud hosts with ease.
In this example, we are going to show how live migration works when connected to an SMB file share. With Windows Server 2012 and SMB3, you can store your virtual machine hard disk files and configuration files on an SMB share and live migrate the VM to another host, whether that host is part of a cluster or not.
[Click] Live migration setup: During the live migration setup stage, the source host creates a TCP connection with the destination host. This connection transfers the virtual machine configuration data to the destination host. A skeleton virtual machine is set up on the destination host, and memory is allocated to the destination virtual machine.
[Click] Memory page transfer: In the second stage of an SMB-based live migration, the memory that is assigned to the migrating virtual machine is copied over the network from the source host to the destination host.
This memory is referred to as the "working set" of the migrating virtual machine. A page of memory is 4 KB. During this phase of the migration, the migrating virtual machine continues to run. Hyper-V iterates the memory copy process several times, with each iteration requiring a smaller number of modified pages to be copied. After the working set is copied to the destination host, the next stage of the live migration begins.
[Click] Memory page copy process: This stage is a memory copy process that duplicates the remaining modified memory pages for "Test VM" to the destination host. The source host transfers the CPU and device state of the virtual machine to the destination host.
During this stage, the available network bandwidth between the source and destination hosts is critical to the speed of the live migration. Use of a 1-gigabit Ethernet (GbE) or faster connection is important. The faster the source host transfers the modified pages from the migrating virtual machine's working set, the more quickly the live migration is completed.
The number of pages transferred in this stage is determined by how actively the virtual machine accesses and modifies the memory pages. The more modified pages there are, the longer it takes to transfer all pages to the destination host.
[Click] Moving the storage handle from source to destination: During this stage of a live migration, control of the storage that is associated with "Test VM", such as any virtual hard disk files or physical storage attached through a virtual Fibre Channel adapter, is transferred to the destination host. (Virtual Fibre Channel is also a new feature of Hyper-V. For more information, see "Virtual Fibre Channel in Hyper-V".) The following figure shows this stage.
[Click] Bringing the virtual machine online on the destination server: In this stage of a live migration, the destination server has the up-to-date working set for the virtual machine and access to any storage that the VM uses.
At this time, the VM resumes operation.
Network cleanup: In the final stage of a live migration, the migrated virtual machine runs on the destination server. At this time, a message is sent to the network switch, which causes the switch to obtain the new MAC addresses of the migrated virtual machine so that network traffic to and from the VM can use the correct switch port.
The live migration process completes in less time than the TCP time-out interval for the virtual machine that is being migrated. TCP time-out intervals vary based on network topology and other factors.
NOTE: This slide is animated and has 3 clicks.
Not only can we live migrate a virtual machine between two physical hosts; Hyper-V in Windows Server 2012 also introduces live storage migration, which lets you move virtual hard disks that are attached to a running virtual machine without downtime. Through this feature, you can transfer virtual hard disks, with no downtime, to a new location for upgrading or migrating storage, performing backend storage maintenance, or redistributing your storage load. You can perform this operation by using a new wizard in Hyper-V Manager or the new Hyper-V cmdlets for Windows PowerShell. Live storage migration is available for both storage area network (SAN)-based and file-based storage.
When you move a running virtual machine's virtual hard disks, Hyper-V performs the following steps to move storage:
Throughout most of the move operation, disk reads and writes go to the source virtual hard disk.
[Click] After live storage migration is initiated, a new virtual hard disk is created on the target storage device. While reads and writes occur on the source virtual hard disk, the disk contents are copied to the new destination virtual hard disk.
[Click] After the initial disk copy is complete, disk writes are mirrored to both the source and destination virtual hard disks while outstanding disk changes are replicated.
[Click] After the source and destination virtual hard disks are synchronized, the virtual machine switches over to using the destination virtual hard disk, and the source virtual hard disk is deleted.
Just as virtual machines might need to be dynamically moved in a cloud data center, allocated storage for running virtual hard disks might sometimes need to be moved for storage load distribution, storage device servicing, or other reasons.
[Additional information] Updating the physical storage that is available to Hyper-V is the most common reason for moving a virtual machine's storage.
You also may want to move virtual machine storage between physical storage devices at runtime to take advantage of new, lower-cost storage that is supported in this version of Hyper-V, such as SMB-based storage, or to respond to reduced performance that can result from bottlenecks in storage throughput. Windows Server 2012 provides the flexibility to move virtual hard disks both on shared storage subsystems and on non-shared storage, as long as a Windows Server 2012 SMB3 network shared folder is visible to both Hyper-V hosts.
You can add physical storage to either a stand-alone system or to a Hyper-V cluster and then move the virtual machine's virtual hard disks to the new physical storage while the virtual machines continue to run.
Storage migration, combined with live migration, also lets you move a virtual machine between hosts on different servers that are not using the same storage. For example, if two Hyper-V servers are each configured to use different storage devices and a virtual machine must be migrated between these two servers, you can use storage migration to a shared folder on a file server that is accessible to both servers and then migrate the virtual machine between the servers (because they both have access to that share). Following the live migration, you can use another storage migration to move the virtual hard disk to the storage that is allocated for the target server.
You can easily perform live storage migration using a wizard in Hyper-V Manager or the Hyper-V cmdlets for Windows PowerShell.
Benefits: Hyper-V in Windows Server 2012 lets you manage the storage of your cloud environment with greater flexibility and control while you avoid disruption of user productivity.
Storage migration with Hyper-V in Windows Server 2012 gives you the flexibility to perform maintenance on storage subsystems, upgrade storage appliance firmware and software, and balance loads as capacity is used, all without shutting down virtual machines.
Requirements for live storage migration:
Windows Server 2012.
The Hyper-V role.
Virtual machines configured to use virtual hard disks for storage.
NOTE: This slide is animated and has 4 clicks.
With Windows Server 2012 Hyper-V, you can also perform a "shared nothing" live migration, where you move a virtual machine, live, from one physical system to another even if they don't have connectivity to the same shared storage. This is useful, for example, in a branch office where you may be storing the virtual machines on local disk and you want to move a VM from one node to another. It is also especially useful when you have two independent clusters and you want to move a virtual machine, live, between them, without having to expose their shared storage to one another. You can also use "shared nothing" live migration to migrate a virtual machine from one datacenter to another, provided your bandwidth is large enough to transfer all of the data between the datacenters.
As you can see in the animation, when you perform a live migration of a virtual machine between two computers that do not share an infrastructure, Hyper-V first performs a partial migration of the virtual machine's storage by creating a virtual machine on the remote system and creating the virtual hard disk on the target storage device.
[Click] While reads and writes occur on the source virtual hard disk, the disk contents are copied over the network to the new destination virtual hard disk. This copy is performed by transferring the contents of the VHD between the two servers over the IP connection between the Hyper-V hosts.
[Click] After the initial disk copy is complete, disk writes are mirrored to both the source and destination virtual hard disks while outstanding disk changes are replicated. This copy, too, is performed over the IP connection between the Hyper-V hosts.
[Click] After the source and destination virtual hard disks are synchronized, the virtual machine live migration process is initiated, following the same process that was used for live migration with shared storage. After the
virtual machine's storage is migrated, the virtual machine migrates while it continues to run and provide network services.
[Click] After the live migration is complete and the virtual machine is successfully running on the destination server, the files on the source server are deleted.
NOTE: This slide is animated and has 3 clicks.

[Click] The first scenario we are going to talk about is how you can achieve greater densities and run more demanding workloads through the scale and performance improvements of Windows Server 2012 Hyper-V. As you virtualize more of your infrastructure, you need a platform, a hypervisor, that can support your most demanding workloads.

[Click] Also, as you adopt newer hardware, you need to be able to utilize its advancements to the fullest, without losing the capability of the investments you have already made in existing infrastructure.

[Click] We do this through new features and updates delivered with Windows Server 2012 Hyper-V, such as:
- Bigger, faster virtual machines
- Hardware offloading
- Non-Uniform Memory Access (NUMA) support
Before Windows Server 2012
Hyper-V in Windows Server 2008 R2 supported configuring virtual machines with a maximum of four virtual processors and up to 64 GB of memory. However, IT organizations increasingly want to use virtualization when they deploy mission-critical, tier-1 business applications. Large, demanding workloads such as online transaction processing (OLTP) databases and online transaction analysis (OLTA) solutions typically run on systems with 16 or more processors and demand large amounts of memory. For this class of workloads, more virtual processors and larger amounts of virtual machine memory are a core requirement.

Hyper-V in Windows Server 2012
Hyper-V in Windows Server 2012 greatly expands support for virtual processors and memory. New features include support for up to 64 virtual processors and 1 TB of memory per Hyper-V guest, a new VHDX virtual hard disk format with a larger disk capacity of up to 64 TB (see the section "New virtual hard disk format"), and additional resiliency. These features help ensure that your virtualization infrastructure can support the configuration of large, high-performance virtual machines for workloads that might need to scale up significantly.
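As a quick illustration of the new per-VM maximums, the Hyper-V PowerShell module can configure a large virtual machine like this (the VM name is hypothetical; processor-count changes of this kind require the VM to be off):

```powershell
Stop-VM -Name "OLTP01"
Set-VMProcessor -VMName "OLTP01" -Count 64          # up to 64 virtual processors
Set-VMMemory    -VMName "OLTP01" -StartupBytes 1TB  # up to 1 TB of memory
Start-VM -Name "OLTP01"
```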
Note: This slide is animated and has 1 click.

Dynamic Memory was introduced with Windows Server 2008 R2 SP1 and is used to reallocate memory between virtual machines running on a Hyper-V host. Improvements in Windows Server 2012 Hyper-V include:
- Minimum memory setting: the ability to set a minimum value for the memory assigned to a virtual machine that is lower than the startup memory setting.
- Hyper-V Smart Paging: paging that enables a virtual machine to reboot while the Hyper-V host is under extreme memory pressure.
- Memory ballooning: the technique used to reclaim unused memory from one virtual machine and give it to another virtual machine that needs it.
- Runtime configuration: the ability to adjust the minimum and maximum memory settings on the fly, while the virtual machine is running, without requiring a reboot.

Because a memory upgrade previously required shutting down the virtual machine, a common challenge for administrators was upgrading the maximum amount of memory for a virtual machine as demand increased. For example, consider a virtual machine running SQL Server and configured with a maximum of 8 GB of RAM. Because of an increase in the size of its databases, the virtual machine now requires more memory. In Windows Server 2008 R2 with SP1, you must shut down the virtual machine to perform the upgrade, which requires planning for downtime and decreases business productivity. With Windows Server 2012, you can apply the change while the virtual machine is running.

[Click] As memory pressure on the virtual machine increases, an administrator can raise the virtual machine's maximum memory value while it is running, without any downtime. The VM's hot-add memory process then asks for more memory, and that memory becomes available for the virtual machine to use.
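A sketch of the SQL Server example using the Hyper-V cmdlets (VM name and sizes are illustrative):

```powershell
# Initial Dynamic Memory configuration.
Set-VMMemory -VMName "SQL01" -DynamicMemoryEnabled $true `
             -MinimumBytes 512MB -StartupBytes 2GB -MaximumBytes 8GB

# Later, while the VM is running, raise the maximum with no downtime.
Set-VMMemory -VMName "SQL01" -MaximumBytes 16GB
```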
Note: This slide is animated and has 2 clicks.

Hyper-V Smart Paging is a memory management technique that uses disk resources as additional, temporary memory when more memory is required to restart a virtual machine. This approach has both advantages and drawbacks. It provides a reliable way to keep virtual machines running when no physical memory is available, but it can degrade virtual machine performance because disk access is much slower than memory access.

To minimize the performance impact, Hyper-V uses Smart Paging only when all of the following are true:
- The virtual machine is being restarted.
- No physical memory is available.
- No memory can be reclaimed from other virtual machines running on the host.

Hyper-V Smart Paging is not used when:
- A virtual machine is being started from an off state (instead of a restart).
- Oversubscribing memory for a running virtual machine would result.
- A virtual machine is failing over in Hyper-V clusters.

Hyper-V continues to rely on internal guest paging when host memory is oversubscribed because it is more effective than Hyper-V Smart Paging. With internal guest paging, the paging operation inside virtual machines is performed by the Windows Memory Manager. The Windows Memory Manager has more information than the Hyper-V host about memory use within the virtual machine, so it can give Hyper-V better information to use when choosing the memory to be paged. Because of this, internal guest paging incurs less overhead than Hyper-V Smart Paging.

In this example, we have multiple VMs running, and we are restarting the last virtual machine. Normally, that VM would be using some amount of memory between its minimum and maximum values.
In this case, the Hyper-V host is running fairly loaded and there isn't enough memory available to give the virtual machine the full startup value it needs to boot.

[Click] When this occurs, a Hyper-V Smart Paging file is created for the VM to give it enough RAM to start.

[Click] After some time, the Hyper-V host uses Dynamic Memory techniques such as ballooning to reclaim RAM from this or other virtual machines, freeing enough RAM to bring all of the Smart Paging contents back off the disk.
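You can control where the Smart Paging file is placed and check whether a VM is currently using one. A minimal sketch (path and VM name are hypothetical):

```powershell
# Place the Smart Paging file on fast local storage.
Set-VM -Name "SQL01" -SmartPagingFilePath "D:\SmartPaging"

# See whether Smart Paging is in use after a restart under memory pressure.
Get-VM -Name "SQL01" |
    Select-Object Name, SmartPagingFilePath, SmartPagingFileInUse
```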
Windows Server 2012 Hyper-V supports NUMA in a virtual machine.

What is NUMA?
NUMA, or Non-Uniform Memory Access, refers to a computer architecture in multiprocessor systems in which the time required for a processor to access memory depends on the memory's location relative to the processor. With NUMA, a processor can access local memory (memory attached directly to the processor) faster than remote memory (memory that is local to another processor in the system). Modern operating systems and high-performance applications such as SQL Server include optimizations that recognize the system's NUMA topology and consider NUMA when scheduling threads or allocating memory, to increase performance.

Guest NUMA
Projecting a virtual NUMA topology onto a virtual machine provides optimal performance and workload scalability in large virtual machine configurations, because it allows the guest operating system and applications such as SQL Server to take advantage of their inherent NUMA performance optimizations (for example, making intelligent NUMA-aware decisions about thread and memory allocation). The default virtual NUMA topology projected into a virtual machine running on Hyper-V is optimized to match the host's NUMA topology, as shown in the figure.
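To inspect the host topology that Hyper-V projects into guests, and to cap the virtual NUMA node size if a workload needs it, something like the following can be used (the VM name and per-node limit are arbitrary examples):

```powershell
# Inspect the host's NUMA nodes (processors and memory per node).
Get-VMHostNumaNode

# Optionally cap how many virtual processors a guest NUMA node may contain.
Set-VMProcessor -VMName "SQL01" -MaximumCountPerNumaNode 8
```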
Note: This slide is animated and has 2 clicks.

The failure of an individual Hyper-V port or virtual network adapter can cause a loss of connectivity for a virtual machine. Using multiple virtual network adapters in a Network Interface Card (NIC) Teaming solution can prevent connectivity loss and, when multiple adapters are connected, multiply throughput.

To increase reliability and performance in virtualized environments, Windows Server 2012 includes built-in support for NIC Teaming-capable network adapter hardware. Although NIC Teaming in Windows Server 2012 is not a Hyper-V feature, it is important for business-critical Hyper-V environments because it can provide increased reliability and performance for virtual machines. NIC Teaming is also known as "network adapter teaming technology" and "load balancing failover" (LBFO).

NIC Teaming in Windows Server 2012 lets a virtual machine have virtual network adapters connected to more than one virtual switch and still have connectivity even if the network adapter under one of those virtual switches is disconnected. This is particularly important for features such as SR-IOV, whose traffic does not go through the Hyper-V Extensible Switch and thus cannot be protected by a network adapter team under a virtual switch. With the virtual machine teaming option, you can set up two virtual switches, each connected to its own SR-IOV-capable network adapter. NIC Teaming then works in one of the following ways:
- Each virtual machine can install a virtual function from one or both SR-IOV network adapters and, if a network adapter disconnection occurs, fail over from the primary virtual function to the backup virtual function.
- Each virtual machine may have a virtual function from one network adapter and a non-virtual-function interface to the other switch.
If the network adapter associated with the virtual function becomes disconnected, the traffic can fail over to the other switch without losing connectivity. Because failover between network adapters in a virtual machine might result in traffic being sent with the MAC address of the other interface, each virtual switch port associated with a virtual machine using NIC Teaming must be set to permit MAC spoofing.

The Windows Server 2012 implementation of NIC Teaming supports up to 32 network adapters in a team. As shown in the figure, the Hyper-V Extensible Switch can take advantage of the native provider support for NIC Teaming, allowing high availability and load balancing across multiple physical network interfaces.

[Click] As you lose one of the NICs within the team…

[Click] …the network traffic that was going through that adapter now flows through one of the remaining adapters in the team.
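A sketch of a host-side team backing a virtual switch, plus the MAC-spoofing setting required for in-guest teaming (adapter, team, switch, and VM names are hypothetical):

```powershell
# Create a two-member team from physical adapters.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
                -TeamingMode SwitchIndependent `
                -LoadBalancingAlgorithm HyperVPort

# Bind a Hyper-V virtual switch to the teamed interface.
New-VMSwitch -Name "TeamSwitch" -NetAdapterName "Team1" -AllowManagementOS $false

# Required when teaming inside the guest: permit MAC spoofing on the VM's port.
Set-VMNetworkAdapter -VMName "SQL01" -MacAddressSpoofing On
```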
Example (see figure)
In this scenario, Contoso Ltd. is a service provider that offers cloud services to businesses that need them. Blue Corp and Yellow Corp are two companies that want to move their Microsoft SQL Server infrastructures into the Contoso cloud while maintaining their current IP addressing. With the new network virtualization feature of Hyper-V in Windows Server 2012, Contoso can do this, as shown in the figure.
Importing a virtual machine from one physical host to another can expose file incompatibilities and other unforeseen complications. Administrators often think of a virtual machine as a single, stand-alone entity that they can move around to address their operational needs. In reality, a virtual machine consists of several parts:
- Virtual hard disks, stored as files in physical storage.
- Virtual machine snapshots, stored as a special type of virtual hard disk file.
- The saved state of the different, host-specific devices.
- The memory file, or snapshot, for the virtual machine.
- The virtual machine configuration file, which organizes the preceding components into a working virtual machine.

Each virtual machine, and each snapshot associated with it, uses unique identifiers. Additionally, virtual machines store and use some host-specific information, such as the path that identifies the location of virtual hard disk files. When Hyper-V starts a virtual machine, the machine undergoes a series of validation checks before being started. Problems such as hardware differences that might exist when a virtual machine is imported to another host can cause these validation checks to fail and, in turn, prevent the virtual machine from starting. Windows Server 2012 includes an Import Wizard that helps you quickly and reliably import virtual machines from one server to another.

The Import Wizard:
- Detects and fixes problems. Hyper-V in Windows Server 2012 introduces a new Import Wizard designed to detect and fix more than 40 types of incompatibilities. You don't have to worry ahead of time about the configuration associated with physical hardware, such as memory, virtual switches, and virtual processors; the wizard guides you through the steps to resolve incompatibilities when you import the virtual machine to the new host.
- Doesn't require the virtual machine to be exported.
You no longer need to export a virtual machine to be able to import it. You can simply copy a virtual machine and its associated files to the new host and then use the Import Wizard to specify the location of the files. This "registers" the virtual machine with Hyper-V and makes it available for use. The flowchart shows the Import Wizard process.

When you import a virtual machine, the wizard does the following:
- Creates a copy of the virtual machine configuration file, as a precaution in case an unexpected restart occurs on the host, such as from a power outage.
- Validates hardware. Information in the virtual machine configuration file is compared to hardware on the new host.
- Compiles a list of errors. This list identifies what needs to be reconfigured and determines which pages appear next in the wizard.
- Displays the relevant pages, one category at a time. The wizard identifies incompatibilities to help you reconfigure the virtual machine so it's compatible with the new host.
- Removes the copy of the configuration file. After the wizard does this, the virtual machine is ready to start.

Benefits
The new Import Wizard is a simpler, better way to import or copy virtual machines. It detects and fixes potential problems, such as hardware or file differences that might exist when a virtual machine is imported to another host. As an added safety feature, the wizard creates a temporary copy of the virtual machine configuration file in case an unexpected restart occurs on the host, such as from a power outage. The Windows PowerShell cmdlets for importing virtual machines let you automate the process.

Requirements
Import Wizard requirements are:
- Two installations of Windows Server 2012 with the Hyper-V role installed.
- A computer that has processor support for hardware virtualization.
- A virtual machine.
- A user account that belongs to the local Hyper-V Administrators group.
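The same flow is scriptable: Compare-VM produces the incompatibility report the wizard works from, and Import-VM registers or copies the machine. A sketch with placeholder paths (a real configuration file is named with the VM's GUID):

```powershell
# Preview incompatibilities before importing.
$report = Compare-VM -Path "D:\VMs\SQL01\Virtual Machines\vm-config.xml"
$report.Incompatibilities | Format-Table Message

# Register the copied VM in place, or copy it and generate a new ID.
Import-VM -Path "D:\VMs\SQL01\Virtual Machines\vm-config.xml"
Import-VM -Path "D:\VMs\SQL01\Virtual Machines\vm-config.xml" -Copy -GenerateNewId
```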
With the evolution of storage systems and the ever-increasing reliance on virtualized enterprise workloads, the VHD format of Windows Server also needed to evolve. The new format is better suited to address current and future requirements for running enterprise-class workloads, specifically:
- Where the size of the virtual hard disk is larger than 2,040 GB.
- To reliably protect against issues with dynamic and differencing disks during power failures.
- To prevent performance degradation on new, large-sector physical disks.

Hyper-V in Windows Server 2012 contains an update to the VHD format, called VHDX, that has much larger capacity and additional resiliency. VHDX supports up to 64 terabytes of storage. It also provides additional protection from corruption during power failures by logging updates to the VHDX metadata structures, and it prevents performance degradation on large-sector physical disks by optimizing structure alignment.

Technical description
The VHDX format's principal new features are:
- Support for virtual hard disk storage capacity of up to 64 terabytes.
- Protection against corruption during power failures by logging updates to the VHDX metadata structures. The format contains an internal log that captures updates to the metadata of the virtual hard disk file before they are written to their final location. If a power failure corrupts a write to the final destination, the change is played back from the log to keep the virtual hard disk file consistent.
- Optimal structure alignment of the virtual hard disk format to suit large-sector disks. If unaligned I/Os are issued to these disks, the Read-Modify-Write cycles required to satisfy them impose a performance penalty.
The structures in the format are aligned to help ensure that no unaligned I/Os occur.

The VHDX format also provides the following features:
- Larger block sizes for dynamic and differencing disks, which lets these disks attune to the needs of the workload.
- A 4-KB logical sector virtual disk that increases performance when used by applications and workloads designed for 4-KB sectors.
- The ability to store custom metadata about the file that you might want to record, such as the operating system version or patches applied.
- Efficiency (called trim) in representing data, which results in smaller files and lets the underlying physical storage device reclaim unused space. (Trim requires pass-through or SCSI disks and trim-compatible hardware.)

The figure illustrates the VHDX hard disk format. As you can see, most of the structures are large allocations and are MB-aligned, which alleviates the alignment issues associated with virtual hard disks. The regions of the VHDX format are as follows:
- Header region. The header region is the first region of the file and identifies the location of the other structures, including the log, the block allocation table (BAT), and the metadata region. It contains two headers, only one of which is active at a time, to increase resiliency to corruption.
- Intent log. The intent log is a circular ring buffer. Changes to the VHDX metastructures are written to the log before they are written to their final location. If corruption occurs during a power failure while an update is being written to the actual location, then on the subsequent open the change is applied again from the log, and the VHDX file is brought back to a consistent state. The log does not track changes to the payload blocks, so it does not protect data contained within them.
- Data region. The BAT contains entries that point to both the user data blocks and the sector bitmap block locations within the VHDX file.
This is an important difference from the VHD format, because sector bitmaps are aggregated into their own blocks instead of being appended in front of each payload block.
- Metadata region. The metadata region contains a table that points to both user-defined metadata and virtual hard disk file metadata such as block size, physical sector size, and logical sector size.

Hyper-V in Windows Server 2012 also introduces support that lets VHDX files represent the data within them more efficiently. Because VHDX files can be large, depending on the workload they support, the space they consume can grow quickly. Previously, when applications deleted content within a virtual hard disk, limitations in the Windows storage stack in both the guest operating system and the Hyper-V host prevented this information from being communicated to the virtual hard disk and the physical storage device. This prevented the Hyper-V storage stack from optimizing the space used and prevented the underlying storage device from reclaiming the space previously occupied by the deleted data.

In Windows Server 2012, Hyper-V supports unmap notifications, which let VHDX files represent the data within them more efficiently. This results in smaller file sizes and lets the underlying physical storage device reclaim unused space.

Benefits
VHDX, which is designed to handle current and future workloads, has a much larger storage capacity than the earlier format and addresses the technological demands of evolving enterprises.
The VHDX performance-enhancing features make it easier to handle large workloads, protect data better during power outages, and optimize structure alignment for dynamic and differencing disks to prevent performance degradation on new, large-sector physical disks.

Requirements
To take advantage of the new VHDX format, you need the following:
- Windows Server 2012 or Windows 8
- The Hyper-V server role

To take advantage of the trim feature, you need the following:
- VHDX-based virtual disks connected as virtual SCSI devices or as directly attached physical disks (sometimes referred to as pass-through disks). This optimization is also supported for natively attached VHDX-based virtual disks.
- Trim-capable hardware.
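Creating and converting VHDX files is a one-liner each with the Hyper-V cmdlets; a sketch with illustrative paths and sizes:

```powershell
# Create a new dynamically expanding VHDX (the format supports up to 64 TB).
New-VHD -Path "D:\VMs\data.vhdx" -SizeBytes 10TB -Dynamic

# Convert an existing VHD to the VHDX format.
Convert-VHD -Path "D:\VMs\legacy.vhd" -DestinationPath "D:\VMs\legacy.vhdx"
```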
Note: This slide has 2 clicks of animation to describe how live migration works when you use Virtual Fibre Channel in the VM.

Current situation
You need your virtualized workloads to connect to your existing storage arrays with as little trouble as possible. Many enterprises have already invested in Fibre Channel SANs, deploying them in their datacenters to address growing storage requirements. These customers often want the ability to use this storage from within their virtual machines instead of having it accessible only from, and used by, the Hyper-V host.

Virtual Fibre Channel for Hyper-V, a new feature of Windows Server 2012, provides Fibre Channel ports within the guest operating system, which lets you connect to Fibre Channel directly from within virtual machines.

With Windows Server 2012
Virtual Fibre Channel support includes the following:
- Unmediated access to a SAN. Virtual Fibre Channel for Hyper-V provides the guest operating system with unmediated access to a SAN by using a standard World Wide Name (WWN) associated with a virtual machine. Hyper-V lets you use Fibre Channel SANs to virtualize workloads that require direct access to SAN logical unit numbers (LUNs). Fibre Channel SANs also enable new scenarios, such as running the Windows Failover Cluster Management feature inside the guest operating system of a virtual machine connected to shared Fibre Channel storage.
- A hardware-based I/O path to complement the Windows software virtual hard disk stack. Mid-range and high-end storage arrays include advanced storage functionality that helps offload certain management tasks from the hosts to the SANs. Virtual Fibre Channel presents an alternative, hardware-based I/O path to the Windows software virtual hard disk stack. This path lets you use the advanced functionality of your SANs directly from Hyper-V virtual machines.
For example, Hyper-V users can offload storage functionality (such as taking a snapshot of a LUN) to the SAN hardware simply by using a hardware Volume Shadow Copy Service (VSS) provider from within a Hyper-V virtual machine.
- N_Port ID Virtualization (NPIV). NPIV is a Fibre Channel facility that allows multiple N_Port IDs to share a single physical N_Port. This allows multiple Fibre Channel initiators to occupy a single physical port, easing hardware requirements in SAN design, especially where virtual SANs are called for. Virtual Fibre Channel for Hyper-V guests uses NPIV (a T11 standard) to create multiple NPIV ports on top of the host's physical Fibre Channel ports. A new NPIV port is created on the host each time a virtual host bus adapter (HBA) is created inside a virtual machine. When the virtual machine stops running on the host, the NPIV port is removed.
- A single Hyper-V host connected to different SANs through multiple Fibre Channel ports. Hyper-V allows you to define virtual SANs on the host to accommodate scenarios where a single Hyper-V host is connected to different SANs via multiple Fibre Channel ports. A virtual SAN defines a named group of physical Fibre Channel ports that are connected to the same physical SAN. For example, assume a Hyper-V host is connected to two SANs: a production SAN and a test SAN. The host is connected to each SAN through two physical Fibre Channel ports. In this example, you might configure two virtual SANs: one named "Production SAN" with the two physical Fibre Channel ports connected to the production SAN, and one named "Test SAN" with the two physical Fibre Channel ports connected to the test SAN. You can use the same technique to name two separate paths to a single storage target.
- Up to four virtual Fibre Channel adapters on a virtual machine. You can configure as many as four virtual Fibre Channel adapters on a virtual machine and associate each one with a virtual SAN.
Each virtual Fibre Channel adapter is associated with one WWN address, or two WWN addresses to support live migration. Each WWN address can be set automatically or manually.
- MPIO functionality. Hyper-V in Windows Server 2012 can use multipath I/O (MPIO) functionality to help ensure optimal connectivity to Fibre Channel storage from within a virtual machine. You can use MPIO with Fibre Channel in the following ways:
  - Virtualize workloads that use MPIO. Install multiple Fibre Channel ports in a virtual machine, and use MPIO to provide highly available connectivity to the LUNs accessible by the host.
  - Configure multiple virtual Fibre Channel adapters inside a virtual machine, and use a separate copy of MPIO within the guest operating system to connect to the LUNs the virtual machine can access. This configuration can coexist with a host MPIO setup.
  - Use different device-specific modules (DSMs) for the host and for each virtual machine. This approach allows live migration of the virtual machine configuration, including the DSM configuration and connectivity between hosts, and compatibility with existing server configurations and DSMs.
- Live migration support with Virtual Fibre Channel. To support live migration of virtual machines across Hyper-V hosts while maintaining Fibre Channel connectivity, two WWNs are configured for each virtual Fibre Channel adapter: Set A and Set B. Hyper-V automatically alternates between the Set A and Set B WWN addresses during a live migration. This helps ensure that all LUNs are available on the destination host before the migration and that minimal downtime occurs during the migration.

Requirements for Virtual Fibre Channel in Hyper-V:
- One or more installations of Windows Server 2012 with the Hyper-V role installed.
- Hyper-V requires a computer with processor support for hardware virtualization.
- A computer with one or more Fibre Channel HBAs, each with an updated HBA driver that supports Virtual Fibre Channel. Updated drivers are included with the in-box HBA drivers for some models.
- Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012 as the guest operating system.
- Connection only to data LUNs. Storage accessed through a virtual Fibre Channel connection to a LUN can't be used as boot media.
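The virtual SAN and virtual HBA configuration described above can be sketched as follows (the SAN name, VM name, and WWN values are placeholders; real deployments use values appropriate to their fabric):

```powershell
# Define a virtual SAN over the host's physical Fibre Channel ports.
New-VMSan -Name "Production" `
          -WorldWideNodeName "C003FF0000FFFF00" `
          -WorldWidePortName "C003FF5778E50000"

# Give the VM a virtual Fibre Channel adapter on that virtual SAN
# (the VM must be off; up to four adapters per VM).
Stop-VM -Name "SQL01"
Add-VMFibreChannelHba -VMName "SQL01" -SanName "Production"
Start-VM -Name "SQL01"
```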
The Microsoft Certified Solutions Associate (MCSA): Windows Server 2012 certification shows that you have the primary set of Windows Server skills that are relevant across multiple solution areas in a business environment. The MCSA: Windows Server 2012 certification is a prerequisite for earning the MCSE: Server Infrastructure, MCSE: Desktop Infrastructure, and MCSE: Private Cloud certifications.

CONFIDENTIAL: The MCSE: Messaging, MCSE: Communication, and MCSE: SharePoint certifications will also be based on the MCSA: Windows Server 2012 certification and will be released later this fiscal year.

Course 20410 is currently available.
Course 20411 will be available at the end of September 2012.
Course 20412 will be available at the end of September 2012.
MCSE: Server Infrastructure
IT departments are increasingly tasked with creating a highly dynamic, available, and cost-effective infrastructure solution that allows growth into the world of cloud-optimized IT:
- Deliver a cloud-optimized datacenter.
- Manage a modern infrastructure solution across technologies.
- Create the power of many servers, with the simplicity of one.
- Transform IT operations and deliver new business value.

The Microsoft Certified Solutions Expert (MCSE): Server Infrastructure certification validates your ability to build server infrastructure solutions that provide a comprehensive platform on which to build and run your infrastructure. With know-how in essential services ranging from identity management and systems management to virtualization, storage, and networking, you have the tools needed to run a highly efficient and modern datacenter. The MCSE: Server Infrastructure certification empowers you to go beyond virtualization and deliver the essential services for a highly efficient and modern datacenter.

Exams 70-413 and 70-414 are currently available to everyone in paid beta.
Course 20413 is scheduled to be available by the end of the calendar year.
Course 20414 is scheduled to be available by the end of the calendar year.

Exam 70-413
Plan and Deploy a Server Infrastructure
- Design an automated server installation strategy. This objective may include but is not limited to: design considerations including images and bare metal/virtual deployment; design a server implementation using the Windows Assessment and Deployment Kit (ADK); design a virtual server deployment.
- Plan and implement a server deployment infrastructure. This objective may include but is not limited to: configure multicast deployment; configure multi-site topology and distribution points; configure a multi-server topology; configure autonomous and replica Windows Deployment Services (WDS) servers.
- Plan and implement server upgrade and migration.
This objective may include but is not limited to: plan for role migration; migrate server roles; migrate servers across domains and forests; design a server consolidation strategy; plan for capacity and resource optimization.
- Plan and deploy Virtual Machine Manager services. This objective may include but is not limited to: design Virtual Machine Manager service templates; define operating system profiles; configure hardware and capability profiles; manage services; configure image and template libraries; manage logical networks.
- Plan and implement file and storage services. This objective may include but is not limited to: planning considerations including iSCSI SANs, Fibre Channel SANs, Virtual Fibre Channel, storage spaces, storage pools, and data de-duplication; configure the iSCSI Target server; configure the Internet Storage Name Server (iSNS); configure Network File System (NFS); install device-specific modules (DSMs).

Design and Implement Network Infrastructure Services
- Design and maintain a Dynamic Host Configuration Protocol (DHCP) solution. This objective may include but is not limited to: design considerations including a highly available DHCP solution (split scope, DHCP failover, and DHCP failover clustering), DHCP interoperability, and DHCPv6; implement DHCP filtering; implement and configure a DHCP management pack; maintain a DHCP database.
- Design a name resolution solution strategy. This objective may include but is not limited to: design considerations including secure name resolution, DNSSEC, DNS socket pool, cache locking, disjoint namespaces, DNS interoperability, migration to application partitions, IPv6, single-label DNS name resolution, zone hierarchy, and zone delegation.
- Design and manage an IP address management solution. This objective may include but is not limited to: design considerations including IP address management technologies (IPAM, Group Policy based, and manual provisioning) and distributed vs.
centralized placement; configure role-based access control; configure IPAM auditing; migrate IPs; manage and monitor multiple DHCP and DNS servers; configure data collection for IPAM.

Design and Implement Network Access Services
- Design a VPN solution. This objective may include but is not limited to: design considerations including certificate deployment, firewall configuration, client/site-to-site, bandwidth, protocol implications, and VPN deployment configurations using the Connection Manager Administration Kit (CMAK).
- Design a DirectAccess solution. This objective may include but is not limited to: design considerations including topology, migration from Forefront UAG, DirectAccess deployment, and enterprise certificates.
- Implement a scalable remote access solution. This objective may include but is not limited to: configure site-to-site VPN; configure packet filters; implement packet tracing; implement multi-site Remote Access; configure Remote Access clustered with Network Load Balancing (NLB); configure DirectAccess.
- Design a network protection solution. This objective may include but is not limited to: design considerations including Network Access Protection (NAP) enforcement methods for DHCP, IPsec, VPN, and 802.1x; capacity; placement of servers; firewall; Network Policy Server (NPS); and remediation network.
- Implement a network protection solution. This objective may include but is not limited to: implement multi-RADIUS deployment; configure NAP enforcement for IPsec and 802.1x; deploy and configure the Endpoint Protection client; create anti-malware and firewall policies; monitor for compliance.

Design and Implement an Active Directory Infrastructure (Logical)
- Design a forest and domain infrastructure. This objective may include but is not limited to: design considerations including multi-forest architecture, trusts, functional levels, domain upgrade, domain migration, forest restructure, and hybrid cloud services.
- Implement a forest and domain infrastructure.
This objective may include but is not limited to: Configure domain rename; configure Kerberos realm trusts; implement a domain upgrade; implement a domain migration; implement a forest restructure; deploy and manage a test forest, including synchronization with production forests

Design a Group Policy strategy.
This objective may include but is not limited to: Design considerations including inheritance blocking, enforced policies, loopback processing, security and WMI filtering, site-linked Group Policy Objects (GPOs), slow-link processing, group strategies, organizational unit (OU) hierarchy, and Advanced Group Policy Management (AGPM)

Design an Active Directory permission model.
This objective may include but is not limited to: Design considerations including Active Directory object security and Active Directory quotas; customize tasks to delegate in the Delegation of Control Wizard; deploy administrative tools on the client computer; delegate permissions on administrative users (AdminSDHolder); configure Kerberos delegation

Design and Implement an Active Directory Infrastructure (Physical)

Design an Active Directory sites topology.
This objective may include but is not limited to: Design considerations including proximity of domain controllers, replication optimization, and site links; monitor and resolve Active Directory replication conflicts

Design a domain controller strategy.
This objective may include but is not limited to: Design considerations including global catalog, operations master roles, Read-Only Domain Controllers (RODCs), partial attribute set, and domain controller cloning

Design and implement a branch office infrastructure.
This objective may include but is not limited to: Design considerations including RODC, Universal Group Membership Caching (UGMC), global catalog, DNS, DHCP, and BranchCache; implement confidential attributes; delegate administration; modify the filtered attribute set; configure password replication policy; configure hash publication

Exam 414

Manage and Maintain a Server Infrastructure

Design an administrative model.
This objective may include but is not limited to: Design considerations including user rights, built-in groups, and an end-user self-service portal; design a delegation of administration structure for Microsoft System Center 2012

Design a monitoring strategy.
This objective may include but is not limited to: Design considerations including monitoring servers using Audit Collection Services (ACS), performance monitoring, centralized monitoring, and centralized reporting; implement and optimize System Center 2012 - Operations Manager management packs; plan for monitoring Active Directory

Design an updates infrastructure.
This objective may include but is not limited to: Design considerations including Windows Server Update Services (WSUS), System Center 2012 - Configuration Manager, and cluster-aware updating; design and configure Virtual Machine Manager for software update management; update VDI desktop images

Implement automated remediation.
This objective may include but is not limited to: Create an update baseline in Virtual Machine Manager; implement a Desired Configuration Management (DCM) baseline; implement Virtual Machine Manager integration with Operations Manager; configure Virtual Machine Manager to move a VM dynamically based on policy; integrate System Center 2012 for automatic remediation into your existing enterprise infrastructure

Plan and Implement a Highly Available Enterprise Infrastructure

Plan and implement failover clustering.
This objective may include but is not limited to: Plan for multi-node and multi-site clustering; design considerations including redundant networks, network priority settings, resource failover and failback, heartbeat and DNS settings, Quorum configuration, and storage placement and replication

Plan and implement highly available network services.
This objective may include but is not limited to: Plan for and configure Network Load Balancing (NLB); design considerations including fault-tolerant networking, multicast vs. unicast configuration, state management, and automated deployment of NLB using Virtual Machine Manager service templates

Plan and implement highly available storage solutions.
This objective may include but is not limited to: Plan for and configure storage spaces and storage pools; design highly available, multi-replica DFS namespaces; plan for and configure multi-path I/O, including Server Core; configure highly available iSCSI Target and iSNS Server

Plan and implement highly available server roles.
This objective may include but is not limited to: Plan for a highly available Dynamic Host Configuration Protocol (DHCP) Server, Hyper-V clustering, Continuously Available File Shares, and a DFS Namespace Server; plan for and implement highly available applications, services, and scripts using Generic Application, Generic Script, and Generic Service clustering roles

Plan and implement a business continuity and disaster recovery solution.
This objective may include but is not limited to: Plan a backup and recovery strategy; planning considerations including Active Directory domain and forest recovery, Hyper-V Replica, domain controller restore and cloning, and Active Directory object and container restore using authoritative restore and Recycle Bin

Plan and Implement a Server Virtualization Infrastructure

Plan and implement virtualization hosts.
This objective may include but is not limited to: Plan for and implement delegation of the virtualization environment (hosts, services, and VMs), including self-service capabilities; plan and implement multi-host libraries including equivalent objects; plan for and implement host resource optimization; integrate third-party virtualization platforms

Plan and implement virtualization guests.
This objective may include but is not limited to: Plan for and implement highly available VMs; plan for and implement guest resource optimization including smart page file, dynamic memory, and RemoteFX; configure placement rules; create Virtual Machine Manager templates

Plan and implement virtualization networking.
This objective may include but is not limited to: Plan for and configure Virtual Machine Manager logical networks; plan for and configure IP address and MAC address settings across multiple Hyper-V hosts, including IP virtualization; plan for and configure virtual network optimization

Plan and implement virtualization storage.
This objective may include but is not limited to: Plan for and configure Hyper-V host storage, including stand-alone and clustered setup using SMB 2.2 and CSV; plan for and configure Hyper-V guest storage, including virtual Fibre Channel, iSCSI, and pass-through disks; plan for storage optimization

Plan and implement virtual guest movement.
This objective may include but is not limited to: Plan for and configure live, SAN, and network migration between Hyper-V hosts; plan for and manage P2V and V2V

Manage and maintain a server virtualization infrastructure.
This objective may include but is not limited to: Manage dynamic optimization and resource optimization; manage Operations Manager integration using PRO Tips; automate VM software and configuration updates using service templates; maintain library updates

Design and Implement Identity and Access Solutions

Design a Certificate Services infrastructure.
This objective may include but is not limited to: Design a multi-tier Certificate Authority (CA) hierarchy with an offline root CA; plan for multi-forest CA deployment; plan for Certificate Enrollment Web Services; plan for network device enrollment; plan for certificate validation and revocation; plan for disaster recovery; plan for trust between organizations

Implement and manage a Certificate Services infrastructure.
This objective may include but is not limited to: Configure and manage an offline root CA; configure and manage Certificate Enrollment Web Services; configure and manage Network Device Enrollment Services; configure Online Certificate Status Protocol (OCSP) responders; migrate a CA; implement administrator role separation; implement and manage trust between organizations; monitor CA health

Implement and manage certificates.
This objective may include but is not limited to: Manage certificate templates; implement and manage deployment, validation, and revocation; manage certificate renewal, including for Internet-based clients; manage certificate deployment and renewal to network devices; configure and manage key archival and recovery

Design and implement a federated identity solution.
This objective may include but is not limited to: Plan for and implement claims-based authentication, including planning and implementing Relying Party Trusts; plan for and configure Claims Provider Trust rules; plan for and configure attribute stores including Active Directory Lightweight Directory Services (AD LDS); plan for and manage Active Directory Federation Services (AD FS) certificates; plan for identity integration with cloud services

Design and implement Active Directory Rights Management Services (AD RMS).
This objective may include but is not limited to: Plan for a highly available AD RMS deployment; manage the AD RMS Service Connection Point; plan for and manage AD RMS client deployment; manage Trusted User Domains; manage Trusted Publishing Domains; manage Federated Identity support; manage distributed and archived rights policy templates; configure Exclusion Policies; decommission AD RMS
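Several of the objectives above (Hyper-V clustering, business continuity, virtual guest movement) touch on Hyper-V Replica, the subject of this seminar. As a rough illustration of how the feature is driven from the Hyper-V PowerShell cmdlets mentioned earlier, the sketch below enables replication for a single VM and rehearses a planned failover. The VM name, host name, and storage path are placeholders, not part of any real environment; this is an outline under those assumptions, not a complete deployment procedure.

```powershell
# Sketch only: "SQL01", "dr-host.contoso.com", and "D:\Replica" are
# illustrative placeholders -- substitute your own VM, hosts, and path.

# On the replica (DR) host: accept incoming replication over Kerberos/HTTP.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replica"

# On the primary host: enable replication for one VM and start the initial
# copy; after that, changes are shipped to the replica every 5 minutes.
Enable-VMReplication -VMName "SQL01" `
    -ReplicaServerName "dr-host.contoso.com" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "SQL01"

# Check replication health, and prepare a planned failover on the primary.
Measure-VMReplication -VMName "SQL01"
Start-VMFailover -VMName "SQL01" -Prepare
```

Note that on Windows Server 2012 the replica host must also have the Hyper-V Replica HTTP (or HTTPS) listener firewall rules enabled before it can receive traffic, and certificate-based authentication is the alternative to Kerberos for untrusted or cross-forest scenarios.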
With the proliferation of devices in the consumer market, the boundaries between work and life have blurred, making flexible, secure, reliable, and consistent access to corporate services mandatory for both traditional and virtualized environments. Enable the modern workstyle. Embrace the consumerization of IT. The MCSE: Desktop Infrastructure certification validates that you can take advantage of the cost savings associated with deploying and managing desktops and devices. Be the hero in your organization: enable a flexible workstyle by providing access from anywhere, on any device, while maintaining security and compliance, with your skills in desktop virtualization, Remote Desktop Services, and application virtualization. The Microsoft Certified Solutions Expert (MCSE): Desktop Infrastructure certification proves you can enable flexible, reliable, and consistent access to corporate services across devices. Exams 415 and 416 are currently available to everyone in paid beta. Course 20415 is scheduled to be available by the end of the calendar year. Course 20416 is scheduled to be available by the end of the calendar year.