1
Who am I, who's in my team, what are my responsibilities; talk about reference
architectures and so on.
2
Sports photo courtesy of Alfred Cop.
3
4
Our starting point for today is the standard model Citrix uses to describe a virtual
desktop.
In this model, several layers are involved in handling the provisioning of the virtual
desktops.
The Desktop Layer describes the actual golden image, containing or running the apps
the user is going to use.
A best practice is to make this image as lean and mean as you can before rolling it
out, to increase scalability and user density on the system.
The Control and Access layer contains the management tools we use to actually
perform the provisioning operations. Studio is the main interface from which we
control machine creation and the assignment of users to these desktops, in combination
with other consoles like the PVS management console if we choose to use that as our
main provisioning tool.
The hypervisor in this model can be pretty much anything we would like. The new kid on
the block is the Nutanix Acropolis Hypervisor, or AHV.
Finally, at the bottom we find the compute and storage layer, which can be designed
and built in many ways, but not every combination of hardware and storage is
suitable for large-scale virtual desktop environments.
5
A little history tour.
Automated provisioning has not always been part of the portfolio. A long, long time
ago, physical deployments of XenApp ruled the world, leveraging the IMA
architecture.
The acquisition of Ardence, in 2006, brought a tool to the Citrix world that solved a
big problem for XenApp customers. No longer did they need to manually or semi-
automatically deploy physical installations of Windows servers and scripted installs of
XenApp; servers could now be run from a single image. Later, when VDI was starting
to become a big new thing, Citrix redesigned the IMA architecture to allow for larger-
scale environments, resulting in what is now called the FlexCast Management Architecture.
This architecture allowed different types of desktop and app delivery to be mixed
and managed from a single console. A big part of FMA was the introduction of a new
provisioning technique called Machine Creation Services.
The first iteration of this was focused on VDI, and later Citrix brought XenApp over to the
FMA architecture as well.
6
Finally, we now also bring the Nutanix Acropolis Hypervisor into the set of FMA-supported
hypervisors. This means that AHV is now fully supported not only to run VMs, but
also to leverage MCS and PVS as provisioning techniques.
7
If we compare PVS and MCS at a high level, there are five comparisons we can make.
First of all the platform: PVS works for both virtual and physical workloads. MCS only
supports a hypervisor-based deployment.
Delivery of the master image with PVS goes over the network (streamed), while MCS
leverages storage as its main delivery mechanism.
The source format used with PVS is a VHDX file of the master VM, which means a
conversion has to take place first. MCS leverages hypervisor-based snapshots.
The storage layer used to store the writes is local-disk focused in the case of
PVS, while MCS leverages a datastore approach. These datastores can however
be locally attached.
Finally, the infrastructure needed: PVS is made up of a separate architecture of one or
more PVS servers, and might need a separate network for streaming. MCS is fully
integrated into the XenDesktop Controllers.
8
Depending on your needs, you can choose between MCS and PVS.
MCS offers three modes, but they are all for virtual desktops only.
Pooled: a set of non-persistent VMs, all sharing the same master image, shared across
multiple users.
Pooled with Personal vDisk (PvD): a set of non-persistent VMs shared across multiple users,
but with a Personal vDisk attached.
Dedicated: a set of persistent VMs spun off from a master image.
9
Pooled VMs can be random (each time you get a new pristine VM, but it can be a
different one than you had previously), or static, which means you log into the same
named VM each time. It will still be cleaned on restart.
PvD-based desktops only have one mode, which is assigned on first use; from
that point on you will log into the same desktop each time, because it also
personalises the PvD, which is fixed to the VM.
Dedicated VMs can be pre-assigned or set to assign on first use, but will in essence require
"normal" PC management in terms of updates, patches and software distribution,
since each will be a persistent VM after its deployment. Once you update the master
image, only new desktops spun off the master will run the newer version of the
master.
PVS allows you to stream desktops to either physical or virtual machines. The most
used mode of PVS is standard image mode, which means non-persistent. Private
image mode is mostly used to update the master.
10
Image from Pixabay.com
https://pixabay.com/en/vehicle-chrome-technology-193213/
11
Machine Creation Services mechanics:
MCS is fully integrated into Citrix Studio and does not require a separate installer or
configuration. It’s there on each XenDesktop Controller.
MCS itself is a VM creation framework that can be taught to understand different
hypervisors. No code is actually placed on the hypervisor.
The method used to provision VMs and link disks to them differs per hypervisor, but MCS
fully controls the creation of the VMs and fires off the commands to the hypervisor's
management platform or APIs.
12
(Image obtained and edited from http://knowyourmeme.com/memes/philosoraptor)
So you could say the XenDesktop Controllers actually speak multiple languages.
First of all, they understand how to speak to VMware ESX, and they do so by talking to
vCenter.
Hyper-V is contacted through SCVMM, and XenServer is addressed through
XAPI.
Finally, Nutanix AHV is accessed through the Nutanix AHV plugin, which is itself
accessed through the Provisioning SDK.
13
MCS itself runs as two services on every XenDesktop Controller.
First we have the Citrix Machine Creation Service, which interfaces with the chosen
hypervisor.
The Citrix AD Identity Service talks with Active Directory and manages the domain
accounts and domain memberships of the VMs.
On the virtual desktops, the Virtual Desktop Agent also contains a service, the
Machine Identity Service.
This service manages the VM's uniqueness.
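As a quick sanity check you can look these services up with PowerShell. A minimal sketch; the display-name filters below are assumptions, so verify the exact service names on your own Controller and VDA:

# On the Controller: the MCS and AD Identity services described above
Get-Service -DisplayName "Citrix*" |
    Where-Object { $_.DisplayName -match "Machine Creation|AD Identity" } |
    Format-Table DisplayName, Status

# On a provisioned desktop: the identity-related service that ships with the VDA
Get-Service -DisplayName "Citrix*Identity*" | Format-Table DisplayName, Status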
14
MCS enables central image management for the admin.
In a pooled static scenario it works as follows:
First of all, you create your golden master VM, install your apps and the Citrix
XenDesktop VDA.
With this golden master, you create a Machine Catalog.
When you run through the wizard in Studio, you will be asked which VM and which
snapshot to use (it will create a snapshot if none is present).
This snapshot is flattened and copied to each configured datastore (as defined in the host
connection details).
When the image has been copied, a preparation procedure is started to create the
identity disk. The preparation VM is then discarded and the actual catalog VMs are cloned.
When cloned, each VM is attached to the master image (read only), to a diff disk
(writable) and to an ID disk (read only). The diff disk can grow as writes happen, but the
ID disk is a fixed 16 MB in size.
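Under the hood Studio drives this through the XenDesktop PowerShell SDK. Here is a hedged sketch of roughly the same sequence done by hand; the catalog name, naming scheme, hosting unit, snapshot path and domain are all placeholders, and parameters can differ per product version:

Add-PSSnapin Citrix*

# AD identity pool: the machine accounts the new desktops will use (example naming scheme)
New-AcctIdentityPool -IdentityPoolName "Win10-Pooled" `
    -NamingScheme "W10-POOL-##" -NamingSchemeType Numeric -Domain "corp.local"

# Provisioning scheme: links the master VM snapshot to a hosting unit (cluster/datastore)
New-ProvScheme -ProvisioningSchemeName "Win10-Pooled" `
    -HostingUnitName "Cluster1" -IdentityPoolName "Win10-Pooled" -CleanOnBoot `
    -MasterImageVM "XDHyp:\HostingUnits\Cluster1\Win10-Master.vm\Base.snapshot"

# Broker catalog as it shows up in Studio; Random + Discard = pooled, non-persistent
$scheme = Get-ProvScheme -ProvisioningSchemeName "Win10-Pooled"
New-BrokerCatalog -Name "Win10-Pooled" -AllocationType Random `
    -ProvisioningType MCS -SessionSupport SingleSession -PersistUserChanges Discard `
    -ProvisioningSchemeId $scheme.ProvisioningSchemeUid

The actual machines are then created with New-ProvVM and registered with New-BrokerMachine, which is what the Studio wizard does for you behind the scenes.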
15
16
17
18
19
If you want to do updates, all it takes is to boot up the master, perform the change
and choose the "Update Catalog" function in Studio.
Studio then creates a snapshot and copies a new flattened image to the datastore(s).
Depending on the options you choose, you can have all VMs pointed to the new
image right away, or do it in a rolling fashion. You can also roll back later if you want.
When you roll out a new image, the VMs are pointed to the new version on restart,
and the diffs are cleared.
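The same update flow is exposed in the PowerShell SDK. A hedged sketch, where the scheme and catalog names, the snapshot path and the reboot timings are placeholders:

Add-PSSnapin Citrix*

# Point the provisioning scheme at a new snapshot of the updated master
Publish-ProvMasterVMImage -ProvisioningSchemeName "Win10-Pooled" `
    -MasterImageVM "XDHyp:\HostingUnits\Cluster1\Win10-Master.vm\Update1.snapshot"

# Roll the catalog onto the new image with a controlled reboot cycle
Get-BrokerCatalog -Name "Win10-Pooled" |
    Start-BrokerRebootCycle -RebootDuration 120 -WarningDuration 15 `
        -WarningMessage "Your desktop will restart shortly to apply an update."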
20
21
22
23
Now let's take a look at how Citrix Studio connects with all the different hypervisors.
Citrix Studio normally resides on a XenDesktop Controller and interfaces with the
different services running on that controller.
These services take care of the brokering, manage the hosting connection and do the
MCS-related tasks we mentioned earlier.
Studio is not the only way to interface with the core of XenDesktop; PowerShell
cmdlets are also available directly.
When using VMware, Citrix Studio talks to vCenter. vCenter therefore needs to be
made highly available if you want to make sure you can always manage your
environment or have Studio manage the VMs, maintain the idle pools, etc.
Should vCenter go down, you will not lose the current sessions, and any VMs that
have already registered themselves with the broker are still available for login.
vCenter in turn does the actual power on/power off of the VMs and the tasks related
to provisioning.
To get the most out of your storage and use the benefits of thin provisioning, Citrix
recommends using NFS datastores connected to the hypervisor hosts.
VMware will use its own VMDK disk format.
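For reference, a couple of hedged examples of hitting that same layer directly from PowerShell instead of Studio; the machine name is a placeholder:

Add-PSSnapin Citrix*

# Hosting connections and hosting units that Studio has configured
Get-BrokerHypervisorConnection
Get-ChildItem XDHyp:\HostingUnits

# The broker, not the hypervisor console, drives power actions such as this one
New-BrokerHostingPowerAction -MachineName "CORP\W10-POOL-01" -Action TurnOn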
24
25
26
27
28
29
30
If you take a look at the NFS datastore, you will find each provisioned VM in its
own folder. VMware actually has the tidiest folder structure when you compare it to the
other hypervisors, and it's easy to see which files do what.
The master vdisk is placed in a separate folder in the root of the datastore(s).
Directly linked to a pooled static VM are two vmdk files.
The first one is the identity disk, which will not exceed 16 MB of space. In practice it's
about half of that.
The second disk is a delta.vmdk, but this is actually a redirector disk. More on
that in a few slides.
31
If you open the identity disk vmdk you can see it is readable text.
It contains many variables that have been set by Studio to help make the VM unique,
even though it was spun off a master disk.
These variables are picked up by the VDA.
Amongst others, you will find a desktop ID there, but also the ID of the catalog the VM is
part of and the license server this VDA ties to.
32
If you open the delta.vmdk that's directly linked to the VM (i.e. configured in the VMX
file) you will see it's plain text as well.
In there you can see it is linked to a parent vdisk, which is actually the master
image you created the catalog with.
You can see the name of the disk is the same as the catalog.
Secondly, the redirector points to another delta.vmdk, which is actually the writecache.
33
When you boot the VM, you will see more files being created, two of which are
REDO logs.
The delta.REDO is the actual writecache, and REDO files will be cleared on restart of
the VM.
Now these files are not the same as snapshots, since they only save the disk state and
not anything else that might have changed in the config of the VM, or its memory
and CPU state.
34
Should you copy a 1 GB file to the virtual desktop, you can actually see the write
cache grow.
The way Studio sets up the writecache (using a redirector and REDO files) is very
smart, because it does not have to do anything to clean out the writecache; it just
has to issue a restart of the VM (not a reboot).
35
Hyper-V's architecture is similar for the Studio part of course, but Studio needs to
interface with System Center Virtual Machine Manager to be able to communicate
with the Hyper-V hosts.
Just pointing Studio to SCVMM is not enough; you also need to install the SCVMM
admin console on each XenDesktop Controller.
The same thing applies as with VMware: you need to make sure SCVMM is highly
available to be able to have Studio manage the VMs.
Hyper-V hosts prefer SMB datastores and they will use the VHDX format to provision
disks.
36
If you use XenServer, Studio can talk directly to the XenServer pool master and needs
no management layer in between, as the management layer for XenServer is more or
less embedded.
Still, you have to make sure the pool master is always reachable.
When using XenServer, Studio again prefers NFS, and on it XenServer will utilise the VHD
file format.
37
If you look at the file structure for XenServer, you will see that vdisks are created with
GUIDs as their names. It's not clear from just the name or folder to what VM a disk
belongs.
To get that insight you'd have to go the command-line route, and use "xe vm-list" etc.
to work out which is which. From the XenCenter GUI you can see which disks are
attached.
You might also notice the "Preparation" VM that is booted up during catalog
creation, during which it will do a mini sysprep and generate the identity disk info to
be used for the rest of the VMs. The base disk itself will be copied from the master
VM into each configured datastore.
38
Once the VMs have been created you will see these pairs of ID and writecache
disks (a VHD chain is being built).
39
The last hypervisor we look at (and most certainly not the least!) is the Nutanix
Acropolis Hypervisor.
AHV has been around for a while, but we've only really started calling it AHV since
June last year, when we released a new version of it at Nutanix .Next in Miami (which
will be held in Vegas in two weeks' time, by the way).
We're proud to announce full GUI-based support for AHV in XenDesktop, which
includes the use of Machine Creation Services.
How does it work?
First of all you need XenDesktop 7.9, which has the latest version of the Provisioning
SDK installed on it.
Together with the Nutanix AHV plugin you install on every Controller, you can now
have Studio talk natively to Acropolis.
Nutanix clusters are automatically highly available because of the distributed
architecture, so you only need to point Studio to the Nutanix cluster IP address and
you're done.
It works the same way from that point on: you create catalogs based on
snapshots, which then leads to provisioned VMs with ID disks and writecaches.
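If you prefer PowerShell over Studio, the hosting connection can also be created through the XDHyp: provider. This is only a rough sketch built on assumptions: the "Custom" connection type, the "AcropolisFactory" plugin id, the cluster address, the credential handling and the property name used at the end are placeholders to be checked against the plugin's own documentation and your SDK version.

Add-PSSnapin Citrix*

$cred = Get-Credential   # Prism account for the cluster (assumption)

# Create the hosting connection against the Nutanix cluster IP (all values are placeholders)
$conn = New-Item -Path XDHyp:\Connections -Name "Nutanix-AHV" `
    -ConnectionType "Custom" -PluginId "AcropolisFactory" `
    -HypervisorAddress "10.0.0.50" `
    -UserName $cred.UserName -Password $cred.Password -Persist

# Register the connection with the broker so Studio and MCS can use it
New-BrokerHypervisorConnection -HypHypervisorConnectionUid $conn.HypervisorConnectionUid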
40
Under the hood we do a couple of things differently than the previous three hypervisors.
We use a full-clone approach, with copy-on-write functionality.
Each VM is linked to the master image, but will show up as a full vdisk, and the 16
MB ID disk is also attached.
These vdisks are thin provisioned, and while they show a usage of 10 GB in the above
example, the actual data usage will be much lower, since we deduplicate and
compress data at the storage level.
After every restart of the VM, the writecache disk is reset.
41
Here you see the actual logical files when looking at the datastore through a WinSCP
client, with the writecache disks at the top and the ID disks at the bottom.
42
Image from Pixabay.com
https://pixabay.com/en/motor-engine-compartment-mustang-1182219/
So, let's dive into Provisioning Services.
43
VIDEO
Everyone remember this nice little Ardence video? It sure made an impact, since we
are still using this technology today to literally deliver hundreds of thousands of
desktops to end users.
PVS is a streaming service and operates mostly at the network level.
It uses the exact same streaming method regardless of hypervisor, and as such does
not need to be adapted to or taught new hypervisors.
As long as the hypervisor supports PXE or boot-ISO methods of booting the VMs,
you're good to go.
PVS actually intercepts the normal HDD boot process and redirects it over the
network to a shared vdisk.
The PVS servers you need only control the streaming of the vdisks; Studio is
still required to do the VM start/restart operations.
Pre-existing VMs can be used, or VMs can be added using a wizard within PVS that sets
the boot order of the VMs.
Before you can use PVS, there is an image conversion process you need to perform.
44
VIDEO
Everyone remember this nice little Ardence video? It sure made an impact, since we
are still using this technology today, to literally deliver hundreds of thousands of
desktops to end user.
PVS is a streaming service and operates mostly on the network level.
It uses the exact same streaming method regardless of hypervisor, and as such does
not need to be adapted or learned new hypervisors.
As long as the hypervisors support PXE or boot iso methodes of booting the VM’s,
you’re good to go.
PVS actually intercepts the normal HDD bootprocess and redirects it over the
network to a shared vdisk.
The PVS servers you need actually only control the streaming of the vdisks, Studio is
still required to do the VM start/restart operations.
Pre-Existing VM’s can be used, or added using a wizard within PVS that sets the boot
order of the VM’s.
Before you can use PVS, there is an image conversion process you need to perform.
45
Let's take a look at the PVS architecture.
PVS works with a separate infrastructure in addition to Studio, which also needs to be
sized correctly for the number of desktops it's going to provision, and of course made
highly available.
In practice you will always have more than one PVS server, and the number of
desktops you can stream per PVS server lies in the hundreds to a couple of thousand,
depending on the PVS server's specs.
A best practice is to use separate network segments or VLANs to separate the (mostly
read-only) streaming traffic of the vdisks.
To allow a VM to read from a vdisk, it needs to be part of a Device Collection on the
PVS server, and the actual tie to the vdisk is based on MAC addresses.
This allows for quick swapping of vdisks, by just dragging and dropping.
A vdisk can be in three modes, of which Standard and Private mode are the most used.
Standard mode enables a one-to-many scenario, enables the writecache and makes the
vdisk read-only.
Private mode is mostly used to do updates to the vdisk, as it makes the vdisk
writeable but only allows one VM to boot from it.
There is a hybrid mode in PVS, but it is hardly ever used.
46
These vdisks are streamed to the VMs, which can initiate the boot process through either
PXE or mounting a boot ISO (BDM). PXE requires some extra DHCP settings (options
66 and 67) to be set, and might pose a challenge in an environment where other
services depend on PXE. BDM solves that problem, but needs more configuration on
the VM side, plus the administration of the boot ISO.
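As an illustration, setting those two options on a Windows DHCP server could look like this. A hedged sketch: the scope and the PVS host name are placeholders, and ARDBP32.BIN is the usual PVS bootstrap file name.

# Requires the DhcpServer module that ships with the Windows DHCP Server role
Set-DhcpServerv4OptionValue -ScopeId 10.0.10.0 -OptionId 66 -Value "pvs01.corp.local"  # Boot Server Host Name
Set-DhcpServerv4OptionValue -ScopeId 10.0.10.0 -OptionId 67 -Value "ARDBP32.BIN"       # Bootfile Name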
The writes in PVS will go to the writecache, and this can be placed in different
locations. Most often used is a local disk (an actual disk mounted to the VM, which
can be configured to be placed on a local datastore).
The second option is to put the writecache in RAM, but this is a little tricky, because RAM
is not as abundantly available as disk is, and when the RAM fills up the VM will halt
or even blue screen. It's the same thing that could happen when you run out of disk
space.
The third option is to place the writecache back on the PVS server, but this is hardly ever
done, as it will create serious bottlenecks and other management issues.
A fourth, hybrid form has become more popular recently: it uses RAM first, and when
that is full, it will overflow to disk. While it might sound great as it lowers IOPS
requirements at first, there are some downsides to it, because when the RAM
actually overflows, performance might go down. So sizing this correctly and keeping a
tight watch on it is very important.
47
The vdisk creation process is a bit more time consuming than MCS, since you need to
actually copy the entire disk contents to a VHD file using a wizard after creating the
master VM.
To be able to do this you need to install the Target Device tool on the VM as well as
the VDA. It is not part of the VDA.
Once you have done this, you can run the Imaging Wizard to create the vdisk. The
Imaging Wizard also has some options to tune the VM by optimizing some settings,
but it is not very extensive.
Keep using the best-practice guides and tools for that which are available in the
community.
Once you have created the vdisk, you can create a device collection (either manually
or using a wizard) and literally drag and drop a vdisk on top of it.
That makes version management of the solution very easy. Switching back and forth
between vdisks only requires a reboot of the devices.
48
So how can we get the most out of these provisioning techniques?
49
50
The most important thing is to choose the right solution first.
We’ve created this little flow chart to help you.
51
Image acquired from Imageflip.com
https://imgflip.com/memetemplate/The-Most-Interesting-Man-In-The-World
While PVS didn't really have any storage issues other than managing the local
writecaches, MCS did have issues when it was first released, since it was conceived in a
time when SANs were still roaming the earth freely and undisputed.
The problem for MCS is that, because of this, it was not really a viable technology, since
SANs run out of juice pretty quickly when a high number of VDI VMs hit them for
IOPS.
This has become better over the years, but still, as the number of desktops
increases, the overall performance for each desktop goes down, since the SAN has to
divide the performance and capacity it has over more and more VMs.
So most of the time, when people actually used MCS, they could only really do so by
utilizing local storage.
Now this brings a whole lot of extra management complexity with it.
52
If you just use local storage, this means you first have to size right. How big will your
writecaches become, and how many VMs do you intend to put on a host? These
disks also need to be in some form of RAID setup to minimize failure impact.
When you configure XenDesktop to utilize local storage, this means you have to
configure each individual host within XenDesktop.
And then, as soon as you create or update a catalog, the vdisk needs to be copied to
all these hosts, extending image rollout times considerably, especially if you have a
large server farm.
53
A great way to solve this is to go the software-defined route and opt for a distributed
filesystem.
While this still leverages local storage (SSDs preferably, so it can't bottleneck like a
SAN would), it is not managed as local storage.
The hypervisor just sees a single datastore, which also means you only need to
configure it once in XenDesktop.
The net benefit of this is that it also requires just a single copy to be made on image
rollout.
There is however a problem with this if you use a typical, run-of-the-mill distributed
filesystem (one that has no techniques to truly localize data), since it will not
be optimized for running a multi-reader scenario.
What will happen is that when the golden master is created, this process will write
the vdisk to the storage of the host that performs the copy. While the VMs local to
that host will read the master vdisk locally, the other hosts in the cluster will access
the vdisk over the network. This could become a bottleneck when enough VMs start
to read from the master. Even though the vdisk is probably replicated to other hosts,
this is only done to assure data availability in case of a host failure. It will not
distribute the disk for load-balancing purposes. At most, each host might have a small
cache to try to avoid some of the read traffic, but these caches are not optimized for
multi-reader scenarios (they only work for one VM reading its own disk), or the caches
are simply too small to house the vdisk in the first place.
54
Nutanix solves this problem ahead of time by way of Shadow Clones.
As soon as we detect that multiple VMs (two or more remote readers) pull data from
the vdisk, we mark the main vdisk as immutable, which allows for distributed caching
of the entire disk. This is done on reads, block by block. This way each VM will
automatically work with localized data.
This not only relieves the network for reads (writes are local anyway), but it also
seriously improves performance.
This technology is enabled by default and requires no configuration whatsoever.
55
To summarize the benefits of running MCS on distributed storage:
First of all: no more multiple image copies when rolling out. This seriously speeds
up deployment.
Secondly: no need to maintain multiple datastores, making things simpler.
Third: no more IO hotspots, increasing performance.
56
57
Image from imgflip.com
Here's a before and after.
It's obvious that it's much simpler to manage one datastore here than each individual
host with local datastores.
With the technology of today this is now finally viable.
No need to change the hosting connection properties when you want to shut down a
host, for example.
58
Another couple of benefits of putting MCS on distributed storage:
1. Your VMs stay movable (i.e. vMotion or Live Migration can be done), even
with local storage. With Nutanix Data Locality, the write cache data will be made
local to the new host automatically.
2. Reduced boot times. Not that this is always important, since most idle pools will
be made ready ahead of the login storms, but it will also lower login times and
improve overall end-user performance.
3. Since we are no longer tied to a SAN infrastructure or require RAM caching
techniques to increase local IOPS, we can reach much better scalability, since
it will be linear.
59
Shadow Clones offer distributed caching of vDisks or VM data in multi-reader
scenarios.
Up to 50% performance improvement during VDI boot storms and other multi-reader
scenarios.
60
Does this mean there is no benefit for PVS when using distributed filesystems?
On the contrary, there are many!
With PVS there was no issue with reading the master image, so that particular benefit
(Shadow Clones) will not apply.
What's left?
1. No need to manage the local storage required for the writecache and make sure
it's HA (RAID etc.).
2. No need to worry about local disks filling up and crashing VMs. It will just spill
over and leverage other hosts' storage if needed.
3. No worry about local IOPS.
4. PVS-ed VMs with a writecache stay movable. The writecache will follow the VM to
its new host thanks to Data Locality.
5. Simple configuration: just create the writecache disk of the template VM on the
distributed datastore.
6. No need to use RAM caching technology to save on IOPS. This means more
RAM is available to VMs, which means better scalability and higher VM density.
61
62
Before we end the session we have one more thing.
We have told you in the last 35 minutes what is currently available for MCS and PVS.
But this world isn't standing still. What's coming?
So we have a special guest today.
Please give a warm welcome to the man who actually builds this awesome MCS and
PVS technology: Jeff PinterParssons!
63
64
Not a product manager so no commitments
65
66
67
68
69
70
71
72
73
74
75

  • 9. Depending on your needs, you can choose between MCS and PVS. MCS offers three modes, but they are all for virtual desktops only. Pooled: a set of non-persistent VMs, all sharing the same master image, shared across multiple users. Pooled with PvD: a set of non-persistent VMs shared across multiple users, but with a Personal vDisk attached. Dedicated: a set of persistent VMs spun off from a master image. 9
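As a rough illustration of how these three catalog types map onto the XenDesktop PowerShell SDK; this is a sketch only, the catalog names are made up, and exact parameter values can differ per product version:

```powershell
# Minimal sketch of the broker side of the three MCS catalog types (names are examples).
Add-PSSnapin Citrix*

# Pooled (random): non-persistent, user changes discarded at reboot
New-BrokerCatalog -Name "Win7-Pooled" -ProvisioningType MCS -SessionSupport SingleSession `
    -AllocationType Random -PersistUserChanges Discard

# Pooled with PvD: non-persistent base image, user changes captured on a Personal vDisk
New-BrokerCatalog -Name "Win7-PvD" -ProvisioningType MCS -SessionSupport SingleSession `
    -AllocationType Static -PersistUserChanges OnPvd

# Dedicated: statically assigned, user changes persist on the VM's own disk
New-BrokerCatalog -Name "Win7-Dedicated" -ProvisioningType MCS -SessionSupport SingleSession `
    -AllocationType Static -PersistUserChanges OnLocal
```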
  • 10. Pooled VMs can be random (each time you get a pristine VM, but it can be a different one than you had previously) or static, which means you log into the same named VM each time; it is still cleaned on restart. PvD-based desktops have only one mode, assigned on first use: from that point on you log into the same desktop each time, because it also personalises the Personal vDisk, which is fixed to the VM. Dedicated VMs can be pre-assigned or assigned on first use, but in essence require "normal" PC management in terms of updates, patches and software distribution, since they are persistent VMs after deployment. Once you update the master image, only new desktops spun off the master will run the newer version. PVS allows you to stream desktops to either physical or virtual machines. The most used mode of PVS is standard image mode, which means non-persistent. Private image mode is mostly used to update the master. 10
  • 12. Machine Creation Services mechanics: MCS is fully integrated into Citrix Studio and does not require a separate installer or configuration; it is there on each XenDesktop Controller. MCS itself is a VM creation framework that can be taught to understand different hypervisors. No code is actually placed on the hypervisor. The method used to provision VMs and link disks to them differs per hypervisor, but MCS fully controls the creation of the VMs and fires off the commands to the hypervisor's management platform or APIs. 12
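Because this abstraction lives on the Controller, the same cmdlets work regardless of which hypervisor backs the connection. A quick read-only sketch (output obviously varies per site):

```powershell
Add-PSSnapin Citrix*

# Hosting connections the site knows about (vCenter, SCVMM, XenServer, AHV, ...)
Get-BrokerHypervisorConnection | Select-Object Name, State

# The XDHyp: provider exposes each connection's inventory in the same, hypervisor-neutral way
Get-ChildItem XDHyp:\Connections
Get-ChildItem XDHyp:\HostingUnits
```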
  • 13. (Image obtained and edited from http://knowyourmeme.com/memes/philosoraptor) So you could say the XenDesktop Controllers actually speak multiple languages. First of all they understand how to speak to VMware ESX, and they do so by talking to vCenter. Hyper-V is contacted through SCVMM, and XenServer is addressed through XAPI. Finally, Nutanix AHV is accessed through the Nutanix AHV plugin, which is in turn accessed through the Provisioning SDK. 13
  • 14. MCS itself runs as two services on every XenDesktop Controller. First we have the Citrix Machine Creation Service, which interfaces with the chosen hypervisor. The Citrix AD Identity Service talks to Active Directory and manages the domain accounts and domain memberships of the VMs. On the virtual desktops the Virtual Desktop Agent also contains a service, the Machine Identity Service, which manages the VM's uniqueness. 14
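A quick way to confirm the relevant services are present and running; a sketch, assuming it is run on a Delivery Controller:

```powershell
# List the Citrix services on a Delivery Controller; the Machine Creation and
# AD Identity services should both show as Running.
Get-Service -DisplayName "Citrix*" | Sort-Object DisplayName | Select-Object DisplayName, Status
```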
  • 15. MCS enables central image management for the admin. In a pooled static scenario it works as follows. First you create your golden master VM and install your apps and the Citrix XenDesktop VDA. With this golden master you create a Machine Catalog. When you run through the wizard in Studio, you will be asked which VM and which snapshot to use (it will create a snapshot if none is present). This snapshot is flattened and copied to each configured datastore (from the host connection details). When the image has been copied, a preparation procedure is started to create the identity disk. This preparation VM is then discarded and the actual catalog VMs are cloned. When cloned, each VM is attached to the master image (read only), to a diff disk (writable) and to an ID disk (read only). The diff disk can grow as writes happen, but the ID disk has a fixed 16 MB size. 15
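Studio's wizard drives this through the same SDK you can call directly. A heavily simplified sketch of the underlying steps; every name, path and size below is a made-up example, error handling is omitted, and the new VMs would still need to be added to a broker catalog:

```powershell
Add-PSSnapin Citrix*

# 1. An identity pool defines how the AD machine accounts are named and which domain they join
New-AcctIdentityPool -IdentityPoolName "Win7-Pooled" -NamingScheme "W7POOL-###" `
    -NamingSchemeType Numeric -Domain "corp.local"

# 2. The provisioning scheme points at the master VM snapshot; -CleanOnBoot makes the
#    difference disks reset on every restart (the pooled, non-persistent behaviour)
New-ProvScheme -ProvisioningSchemeName "Win7-Pooled" -HostingUnitName "Cluster-NFS01" `
    -IdentityPoolName "Win7-Pooled" -CleanOnBoot `
    -MasterImageVM "XDHyp:\HostingUnits\Cluster-NFS01\Win7-Master.vm\PreGold.snapshot" `
    -VMMemoryMB 2048 -VMCpuCount 2

# 3. Create the AD accounts and then the VMs themselves
New-AcctADAccount -IdentityPoolName "Win7-Pooled" -Count 10
New-ProvVM -ProvisioningSchemeName "Win7-Pooled" `
    -ADAccountName (Get-AcctADAccount -IdentityPoolName "Win7-Pooled").ADAccountName
```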
  • 20. If you want to do updates, all it takes is to boot up the master, perform the change and choose the "Update Catalog" function in Studio. It then creates a snapshot and copies a new flattened image to the datastore(s). Depending on the options you choose, you can have all VMs point to the new image right away, or do it in a rolling fashion. You can also roll back later if you want. When you roll out a new image, the VMs are pointed to the new version on restart and their diffs are cleared. 20
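The same update can be scripted; a sketch in which the scheme name and snapshot path are examples, and the history cmdlet assumes a reasonably recent SDK:

```powershell
Add-PSSnapin Citrix*

# Point the provisioning scheme at a new master snapshot; VMs pick it up on their next restart
Publish-ProvMasterVMImage -ProvisioningSchemeName "Win7-Pooled" `
    -MasterImageVM "XDHyp:\HostingUnits\Cluster-NFS01\Win7-Master.vm\SP1-Update.snapshot"

# Review previously published images, useful when deciding what to roll back to
Get-ProvSchemeMasterVMImageHistory -ProvisioningSchemeName "Win7-Pooled"
```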
  • 24. Now let's take a look at how Citrix Studio connects with all the different hypervisors. Citrix Studio normally resides on a XenDesktop Controller and interfaces with the different services running on that host. These services take care of the brokering, manage the hosting connection and do the MCS-related tasks we mentioned earlier. Studio is not the only way to interface with the core of XenDesktop; the PowerShell cmdlets are also available directly. When using VMware, Citrix Studio talks to vCenter. vCenter therefore needs to be made highly available if you want to make sure you can always manage your environment or have Studio manage the VMs, maintain the idle pools and so on. Should vCenter go down, you will not lose the current sessions, and any VMs that have already registered themselves with the broker are still available for login. vCenter in turn does the actual power on/power off of the VMs and the tasks related to provisioning. To get the most out of your storage and use the benefits of thin provisioning, Citrix recommends using NFS datastores connected to the hypervisor hosts. VMware will use its own VMDK disk format. 24
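For example, power operations are requested against the broker and the hosting connection (vCenter in this case) carries them out. A small sketch; the machine name is an example:

```powershell
Add-PSSnapin Citrix*

# Ask the broker to restart a provisioned VM; vCenter performs the actual operation
New-BrokerHostingPowerAction -MachineName "CORP\W7POOL-001" -Action Restart

# Inspect the queue of power actions and their current state
Get-BrokerHostingPowerAction | Select-Object MachineName, Action, State
```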
  • 31. If you take a look at the NFS datastore, you will find each provisioned VM in its own folder. VMware actually has the tidiest folder structure compared to the other hypervisors, and it is easy to see which files do what. The master vDisk is placed in a separate folder in the root of the datastore(s). Directly linked to a pooled static VM are two VMDK files. The first one is the identity disk, which will not exceed 16 MB of space; in practice it uses about half of that. The second disk is a delta.vmdk, but this is actually a redirector disk. More on that in a few slides. 31
  • 32. If you open the identity disk VMDK you can see it is readable text. It contains many variables that have been set by Studio to help make the VM unique, even though it was spun off a master disk. These variables are picked up by the VDA. Amongst others, you will find a desktop ID there, but also the ID of the catalog the VM is part of and the license server this VDA ties to. 32
  • 33. If you open the delta.vmdk that is directly linked to the VM (i.e. configured in the VMX file) you will see it is plain text as well. In there you can see it is linked to a parent vDisk, which is actually the master image you created the catalog with; you can see the name of the disk is the same as the catalog. Secondly, the redirector points to another delta.vmdk, which is actually the write cache. 33
  • 34. When you boot the VM, you will see more files being created, two of which are REDO logs. The delta REDO file is the actual write cache, and REDO files are cleared on restart of the VM. These files are not the same as snapshots, since they only save the disk state and not anything else that might have changed in the configuration of the VM, or its memory and CPU state. 34
  • 35. Should you copy a 1 GB file to the virtual desktop, you can actually see the write cache grow. The way Studio sets up the write cache (using a redirector and REDO files) is very smart, because it does not have to do anything to clean out the write cache; it just has to issue a restart of the VM (not a reboot). 35
  • 36. Hyper-V's architecture is similar for the Studio part of course, but Studio needs to interface with System Center Virtual Machine Manager to be able to communicate with the Hyper-V hosts. Just pointing Studio to SCVMM is not enough; you also need to install the SCVMM admin console on each XenDesktop Controller. The same thing applies as with VMware: you need to make sure SCVMM is highly available to be able to have Studio manage the VMs. Hyper-V hosts prefer SMB datastores, and they will use the VHDX format to provision disks. 36
  • 37. If you use XenServer, Studio can talk directly to the XenServer pool master and needs no management layer in between, as the management layer for XenServer is more or less embedded. Still, you have to make sure the pool master is always reachable. When using XenServer, Studio again prefers NFS, on which it will use the VHD file format. 37
  • 38. If you look at the file structure for XenServer, you will see that vDisks are created with GUIDs as their names. It is not clear from just the name or folder which VM a disk belongs to. To get that insight you would have to go the command-line route and use "xe vm-list" and related commands to work out which is which. From the XenCenter GUI you can see which disks are attached. You might also notice the "Preparation" VM that is booted up during catalog creation, during which it does a mini sysprep and generates the identity disk info to be used for the rest of the VMs. The base disk itself is copied from the master VM into each configured datastore. 38
  • 39. Once you have the VMs created you will see these pairs of ID and write cache disks (a VHD chain is being built). 39
  • 40. The last hypervisor we look at (and most certainly not the least!) is the Nutanix Acropolis Hypervisor. AHV has been around for a while, but we have only really been calling it AHV since June last year, when we released a new version of it at Nutanix .NEXT in Miami (which will be held in Vegas in two weeks' time, by the way). We are proud to announce full GUI-based support for AHV in XenDesktop, which includes the use of Machine Creation Services. How does it work? First of all you need XenDesktop 7.9, which has the latest version of the Provisioning SDK installed. Together with the Nutanix AHV plugin that you install on every Controller, Studio can now talk natively to Acropolis. Nutanix clusters are automatically highly available because of the distributed architecture, so you only need to point Studio to the Nutanix cluster IP address and you are done. It works the same way from that point on: you create catalogs based on snapshots, which then lead to provisioned VMs with ID disks and write caches. 40
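One way to verify that a hypervisor plugin registered correctly on a Controller is to ask the Host service which plugins it can use; a read-only sketch, and the exact property names may vary by SDK version:

```powershell
Add-PSSnapin Citrix*

# Plugins available for hosting connections; after installing the Nutanix AHV plugin
# it should be listed next to the built-in hypervisor types.
Get-HypHypervisorPlugin | Select-Object DisplayName, PluginFactoryName
```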
  • 41. Under the hood we do a couple of things differently than the previous three hypervisors. We use a full-clone approach with copy-on-write functionality. Each VM is linked to the master image but shows up with the full vDisk, and the 16 MB ID disk is attached as well. These vDisks are thin provisioned, and while they show a usage of 10 GB in the above example, the actual data usage will be much lower, since we deduplicate and compress data at the storage level. After every restart of the VM, the write cache disk is reset. 41
  • 42. Here you see the actual logical files when looking at the datastore through a WinSCP client, with the write cache disks at the top and the ID disks at the bottom. 42
  • 44. VIDEO. Does everyone remember this nice little Ardence video? It sure made an impact, since we are still using this technology today to literally deliver hundreds of thousands of desktops to end users. PVS is a streaming service and operates mostly at the network level. It uses the exact same streaming method regardless of hypervisor, and as such does not need to be adapted to or taught new hypervisors. As long as the hypervisor supports PXE or boot-ISO methods of booting the VMs, you are good to go. PVS actually intercepts the normal HDD boot process and redirects it over the network to a shared vDisk. The PVS servers you need only control the streaming of the vDisks; Studio is still required to do the VM start/restart operations. Pre-existing VMs can be used, or added using a wizard within PVS that sets the boot order of the VMs. Before you can use PVS, there is an image conversion process you need to perform. 44
  • 46. Let's take a look at the PVS architecture. PVS works with a separate infrastructure in addition to Studio; it also needs to be sized correctly for the number of desktops it is going to provision and, of course, made highly available. In practice you will always have more than one PVS server, and the number of desktops you can stream per PVS server lies in the hundreds to a couple of thousand, depending on the PVS server's specs. A best practice is to use separate network segments or VLANs to separate the (mostly read-only) streaming traffic of the vDisks. To allow a VM to read from a vDisk, it needs to be part of a device collection on the PVS server, and the actual tie to the vDisk is done on MAC addresses. This allows for quick swapping of vDisks, by just dragging and dropping. A vDisk can be in three modes, of which standard and private mode are the most used. Standard mode enables a one-to-many scenario, enables the write cache and makes the vDisk read-only. Private mode is mostly used to do updates to the vDisk, as it makes the vDisk writable but only allows one VM to boot from it. There is a hybrid mode in PVS, but it is hardly ever used. 46
  • 47. These vDisks are streamed to the VMs, which can initiate the boot process via either PXE or a mounted boot ISO (BDM). PXE requires some extra DHCP settings (options 66 and 67) and might pose a challenge in an environment where more services depend on PXE. BDM solves that problem, but needs more configuration on the VM side, plus the administration of the boot ISO. The writes in PVS go to the write cache, and this can be placed in different locations. Most often used is a local disk (the actual disk mounted to the VM, though this can be configured to be placed on a local datastore). The second option is to put the write cache in RAM, but this is a little tricky because RAM is not as abundantly available as disk is, and when the RAM fills up, the VM will halt or even blue screen; it is the same thing that could happen when you run out of disk space. The third option is to place the write cache back on the PVS server, but this is hardly ever done as it creates serious bottlenecks and other management issues. A fourth, hybrid form has become more popular recently: it first uses RAM, and when that is full, it overflows to disk. While it might sound great because it lowers IOPS requirements at first, there are some downsides, because when the RAM actually overflows, performance might go down. So sizing this correctly and keeping a tight watch on it is very important. 47
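For the PXE route, the two DHCP options can be set with the Windows DhcpServer PowerShell module. A sketch in which the scope, server IP and bootstrap file name are example values for a typical PVS setup, run on (or pointed at) the DHCP server:

```powershell
# Option 66: boot server (a PVS server or its load-balanced address)
# Option 67: boot file name (the PVS bootstrap)
Set-DhcpServerv4OptionValue -ScopeId 10.0.10.0 -OptionId 66 -Value "10.0.10.21"
Set-DhcpServerv4OptionValue -ScopeId 10.0.10.0 -OptionId 67 -Value "ARDBP32.BIN"
```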
  • 48. The vDisk creation process is a bit more time-consuming than MCS, since you need to actually copy the entire disk contents to a VHD file using a wizard after creating the master VM. To be able to do this you need to install the Target Device tools on the VM as well as the VDA; they are not part of the VDA. Once installed, you can run the Imaging Wizard to create the vDisk. The Imaging Wizard also has some options to tune the VM by optimizing some settings, but it is not very extensive; keep using the best-practice guides and tools available in the community for that. When you have created the vDisk, you can create a device collection (either manually or using a wizard) and literally drag and drop a vDisk on top of it. This makes version management of the solution very easy: switching back and forth between vDisks only requires a reboot of the devices. 48
  • 49. So how can we get the most out of these provisioning techniques? 49
  • 51. The most important thing is to choose the right solution first. We've created this little flow chart to help you. 51
  • 52. Image acquired from Imgflip.com (https://imgflip.com/memetemplate/The-Most-Interesting-Man-In-The-World). While PVS didn't really have any storage issues other than managing the local write caches, MCS did have issues when it was first released, since it was conceived in a time when SANs still roamed the earth freely and undisputed. The problem for MCS is that, because of this, it was not really a viable technology, since SANs run out of juice pretty quickly when a high number of VDI VMs hit them for IOPS. This has become better over the years, but still, as the number of desktops increases, the overall performance per desktop goes down, since the SAN has to divide the performance and capacity it has over more and more VMs. So most of the time when people actually used MCS, they could only really do so when utilizing local storage. Now this brings a whole lot of extra management complexity with it. 52
  • 53. If you just use local storage, this means you first have to size right: how big will your write caches become, and how many VMs do you intend to put on a host? These disks also need to be in some form of RAID setup to minimize failure impact. When you configure XenDesktop to use local storage, you have to configure each individual host within XenDesktop. And then, as soon as you create or update a catalog, the vDisk needs to be copied to all of these hosts, extending image rollout times considerably, especially if you have a large server farm. 53
  • 54. A great way to solve this is to go the software-defined route and opt for a distributed filesystem. While this still leverages local storage (preferably SSDs, so it can't bottleneck like a SAN would), it is not managed as local storage. The hypervisor just sees a single datastore, which also means you only need to configure it once in XenDesktop. A net benefit is that it also requires just a single copy to be made on image rollout. There is a problem, however, if you use a typical run-of-the-mill distributed filesystem that has no techniques to truly localize data, since it will not be optimized for a multi-reader scenario. What happens is that when the golden master is created, this process writes the vDisk to the storage of the host that performs the copy. While the VMs local to that host will read the master vDisk locally, the other hosts in the cluster will access the vDisk over the network. This can become a bottleneck when enough VMs start to read from the master. Even though the vDisk is probably replicated to other hosts, this is only done to assure data availability in case of a host failure; it will not distribute the disk for load-balancing purposes. At most, each host might have a small cache to try to avoid some of the read traffic, but these caches are not optimized for multi-reader scenarios (they only work for one VM reading its own disk) or are simply too small to house the vDisk in the first place. 54
  • 55. Nutanix solves this problem ahead of time by way of Shadow Cloning. As soon as we detect that multiple VMs (two or more remote readers) pull data from the vDisk, we mark the main vDisk as immutable, which allows for distributed caching of the entire disk. This is done on reads, block by block. This way each VM automatically works with localized data. This not only relieves the network of reads (writes are local anyway), it also seriously improves performance. This technology is enabled by default and requires no configuration whatsoever. 55
  • 56. To summarize the benefits of running MCS on distributed storage: first, no more multiple image copies when rolling out, which seriously speeds up deployment. Second, no need to maintain multiple datastores, making things simpler. Third, no more IO hotspots, increasing performance. 56
  • 58. Image from imgflip.com. Here's a before and after. It's obvious that it's much simpler to manage one datastore here than each individual host with local datastores. With the technology of today this is now finally viable. No need to change the hosting connection properties when you want to shut down a host, for example. 58
  • 59. Another couple of benefits of putting MCS on distributed storage: 1. Your VMs stay movable (i.e. vMotion or Live Migration can be done), even with local storage; with Nutanix data locality the write cache data is made local to the new host automatically. 2. Reduced boot times. Not that this is always important, since most idle pools are made ready ahead of the login storms, but it also lowers login times and improves overall end-user performance. 3. Since we are no longer tied to a SAN infrastructure or require RAM caching techniques to increase local IOPS, we can reach much better scalability, since it will be linear. 59
  • 60. Shadow Clones offer distributed caching of vDisks or VM data in multi-reader scenarios: up to 50% performance optimization during VDI boot storms and other multi-reader scenarios. 60
  • 61. Does this mean there is no benefit for PVS when using distributed filesystems? On the contrary, there are many! With PVS there was no issue with reading the master image, so that particular benefit (Shadow Clones) does not apply. What's left? 1. No need to manage the local storage required for the write cache and make sure it's highly available (RAID etc.). 2. No need to worry about local disks filling up and crashing VMs; it will just spill over and leverage other hosts' storage if needed. 3. No worry about local IOPS. 4. PVS-ed VMs with a write cache stay movable: the write cache follows the VM to its new host thanks to data locality. 5. Simple configuration: just create the write cache disk of the template VM on the distributed datastore. 6. No need to use RAM caching technology to save on IOPS. This means more RAM is available to VMs, which means better scalability and higher VM density. 61
  • 63. Before we end the session we have one more thing. We have told you in the last 35 minutes what is currently available for MCS and PVS. But this world isn't standing still. What's coming? We have a special guest today. Please give a warm welcome to the man who actually builds this awesome MCS and PVS technology: Jeff PinterParssons! 63
  • 64. 64
  • 65. Not a product manager so no commitments 65
  • 66. 66
  • 67. 67
  • 68. 68
  • 69. 69
  • 70. 70
  • 71. 71
  • 72. 72
  • 73. 73
  • 74. 74
  • 75. 75