Phase 1: Many issues. The commoditized-hardware, one-app-per-server model has created a monster. Average server utilization is ~15% (source: Gartner). Space, power, and cooling challenges abound. Nearly 80% of IT costs are spent just keeping the lights on, let alone innovating. Etc.

Phase 2: Virtualization begins to take root in test/dev. The benefits of consolidation start to be seen for some production apps ("craplications"). This is goodness.

Phase 3: Virtualization begins to be seen as more than just consolidation, moving to more business- and mission-critical apps. High availability (HA) and disaster recovery (DR) become focus areas. Virtualization is seen as a way to eliminate planned downtime. Again, more goodness.

Phase 4: Goes beyond "agility" to policy-based computing and new paradigms for delivering apps. Applications may become largely streamed and diskless, more utility-like. Virtualization is a key enabler and begins to become pervasively adopted.

BUT the reality is that very few servers (even today) are virtualized. We're still a long way from pervasive adoption of virtualization; penetration is still in the low teens. (See next slide.)

79% of the IT budget goes to keeping the business running (source: Gartner). Server sprawl, electricity, floor space: managing too many physical servers.
We must first start with the underpinnings of XenServer, and that's the Xen hypervisor. It leverages an open-source standard and is lean, with < 50K lines of code. The project works closely with OS vendors and an advisory board: Citrix, IBM, Intel, HP, Novell, Red Hat and Sun Microsystems. The community contributes patches, updates and enhancements. On security, the CIA and NSA actively contribute to Xen. XenServer's differentiation from open-source Xen: the management console, XenMotion, templates, and optimization.
VMs on failed physical servers can automatically be restarted on other servers in the pool.
From an architectural point of view, a XenServer virtual machine (VM) consists of two components: metadata describing the virtual machine environment, and the virtual disk image (VDI) used by the virtual machine. The VM metadata is stored in a small database on the XenServer hosts, and the virtual disk images are stored on the configured Storage Repository, which in multi-host deployments will be a NAS or SAN device. The metadata for a VM contains information about the VM (e.g. name, description, UUID), its configuration (e.g. amount of virtual memory, number of virtual CPUs), and its use of resources on the host or Resource Pool (e.g. Virtual Networks, Storage Repository, ISO Library).
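As a rough mental model, the metadata/VDI split can be sketched in Python. This is purely an illustration with made-up field names, not the actual XenServer database schema:

```python
from dataclasses import dataclass, field
from typing import List
import uuid

@dataclass
class VMMetadata:
    """Describes the VM environment; stored in the small database on the hosts."""
    name: str
    description: str
    uuid: str = field(default_factory=lambda: str(uuid.uuid4()))
    memory_mb: int = 1024          # amount of virtual memory
    vcpus: int = 1                 # number of virtual CPUs
    networks: List[str] = field(default_factory=list)  # Virtual Networks in use
    storage_repository: str = ""   # where the VDIs live (NAS/SAN in pools)

@dataclass
class VirtualDiskImage:
    """The VDI itself lives on the Storage Repository, apart from the metadata."""
    vdi_uuid: str
    size_gb: int

vm = VMMetadata(name="web01", description="Front-end web server",
                memory_mb=2048, vcpus=2,
                networks=["Network 0"], storage_repository="SAN-SR-1")
```

The point of the split is that the small, cheap-to-copy metadata and the large disk images can be handled by different mechanisms, which is exactly what the DR approach below relies on.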
To provide effective Disaster Recovery (DR), we need to replicate both the metadata and the virtual disk images from our production environment to our DR environment. This is easily accomplished by exporting the metadata from the production environment and importing it into the DR environment. The replication of the virtual disk images is best handled by the storage vendor, as methods vary from device to device, but any real-time or scheduled replication system will suffice. Later in this presentation we will hear a bit more about the solutions NetApp can offer.
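The two-part DR cycle described above (export/import the metadata, let the storage array replicate the VDIs) can be sketched with stand-in classes. The class and method names here are hypothetical, not XenServer APIs:

```python
class Pool:
    """Minimal stand-in for a pool's VM-metadata store (hypothetical)."""
    def __init__(self):
        self.metadata = {}

    def export_metadata(self):
        # In XenServer terms: dump the small VM database on the hosts.
        return dict(self.metadata)

    def import_metadata(self, data):
        self.metadata.update(data)

def replicate_for_dr(prod, dr, replicate_vdis):
    """One DR cycle: copy the metadata, then let storage replicate the VDIs."""
    dr.import_metadata(prod.export_metadata())
    replicate_vdis()  # vendor-specific, e.g. a scheduled SAN/NAS snapshot job

prod = Pool()
prod.metadata["web01"] = {"vcpus": 2, "memory_mb": 2048}
dr = Pool()
replicate_for_dr(prod, dr, replicate_vdis=lambda: None)
print(dr.metadata["web01"])  # mirrors the production metadata
```

After the cycle, the DR pool holds the same VM descriptions as production, while the heavy lifting for the disk images stays with the storage layer.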
Even if you are not using remote storage, you can back up VMs and move them around using our import/export functionality. Again, since the VMs are isolated from any hardware differences between the underlying servers, you remove all of the driver headaches found when moving a physical OS instance between boxes.
With virtual desktop delivery we can provide desktops as a service to the users.
How XenDesktop works is essentially quite similar to how end users connect to XenApp. The end user gets to the landing page shown earlier and enters their credentials, at which point the request for a desktop is sent to the Desktop Delivery Controller. The controller then works out, from the data store, which desktop is appropriate for the end user and sets up the environment for the user to connect to. If it's a provisioned desktop (the way we recommend you implement the product) and the environment is not already spun up, it initiates a boot of the virtual machine, and the operating system is delivered from Provisioning Server.

In the XenDesktop setup you have an option to eliminate this startup delay for the end user: for particular desktop groups, you can specify an idle pool. From a marketing perspective this is known as an 'instant on' capability. What it really means is that on the back end we can have virtual machines pre-launched, so that when the user connects they get an instant-on experience. On the Desktop Delivery Controller you configure, for each group, the range of time during which you want a specific number of idle machines available. For example, from 9am to 5pm you might want 15 machines spun up and ready for use, whereas outside working hours you bring it down to 2 or 3. The Desktop Delivery Controller then manages your infrastructure accordingly so the user gets an instant-on experience. At launch, that integration is only available for virtual machines, but an SDK will come later for customers to extend it to blades; we're going to introduce blade support in upcoming releases.

Once the VM has been started, we send a prepare-for-connection message to the VDA (the agent that delivers the ICA experience from the virtual desktop).
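The idle-pool schedule above boils down to a time-of-day lookup. A minimal sketch in Python, using the 9am-5pm / 15-machine numbers from the example (the real configuration is done in the Desktop Delivery Controller console, not in code):

```python
from datetime import time

def idle_pool_size(now: time,
                   work_start: time = time(9, 0),
                   work_end: time = time(17, 0),
                   peak: int = 15,
                   off_peak: int = 3) -> int:
    """Return how many pre-launched (idle) VMs the pool should hold.

    During working hours the controller keeps `peak` machines spun up so
    connecting users get an instant-on experience; outside those hours the
    pool shrinks to `off_peak` to free resources.
    """
    if work_start <= now < work_end:
        return peak
    return off_peak

print(idle_pool_size(time(10, 30)))  # mid-morning: full idle pool -> 15
print(idle_pool_size(time(22, 0)))   # late evening: minimal pool -> 3
```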
The VDA resides on the virtual desktop. It's not always waiting for a connection: from a security perspective, we wanted to control when you can connect to those virtual machines. Only after an end user authenticates and the Desktop Delivery Controller sends a preparation message to the VDA can you connect to a virtual machine using ICA. This is a nice security feature we've implemented to control who can access the virtual desktop and when.

What happens then is that the Desktop Receiver connects to the VDA. What's important here is that, clearly unlike the first release (DS v1), this is a direct ICA connection to whatever the virtual desktop is running on, virtual machine or blade; a direct ICA experience just like customers are used to with XenApp. We then validate, by examining a ticket that was previously created, whether this is the right user for that virtual machine, and if not, we drop the connection. We consume a license (XenDesktop is licensed on a CCU basis, just like XenApp). Then we apply policies to the way that ICA connection is delivered. This is the point at which, in XenApp, we would say which apps are available; in XenDesktop, it is the point at which we control policies such as whether the end user is able to map drives from the local machine. We fully support the policies customers are familiar with in XenApp, as well as the SmartAccess capabilities of the Access Gateway product line. That is when those policies are applied. That is essentially how XenDesktop works.
Just like we described in the whiteboard discussion, the traditional way of doing it is to load everything onto individual desktops or laptops – all the client software of client/server applications, the desktop applications, the web clients, etc. This is costly to manage and support in this distributed fashion – never mind trying to secure it on the endpoints, lock down devices, ensure client-side compatibility, etc. So, rather than “deploy” applications, “deliver” them using Presentation Server and its application virtualization and application streaming features.
With application virtualization, those applications are centralized – the application is no longer installed on the endpoint, instead it is installed on servers in the data center, where you can monitor, control, update and secure them. The client device doesn’t actually need to process the application at all – freeing the application from client-side dependencies altogether. This is why we call this virtualization.
The power of OS streaming technology becomes more evident the more widely it is deployed. It simplifies otherwise intractable problems: rather than having dedicated backup sites, multiple sites can back up to the same datacenter. This creates big savings.
The purpose of today's HOT session is to update the environment for a fictitious company named SNR, Inc. This slide represents the current environment. Remote employees have access to the internal network through an IPsec VPN. This works, but there is no way to enforce conditional access: everyone who logs in through the IPsec VPN receives the same level of access.
SmartAccess is not a feature but rather a concept. SmartAccess incorporates the following:

Who is connecting? Access Gateway uses EPA scans to determine various characteristics about a client device.

What is the result of the connection? Once users are authenticated, will they receive a full VPN connection, clientless access, Web Interface/published applications, etc.?

What resources can be accessed? Will users receive full access to internal network resources or only a subset of resources?

How will users be able to access these resources (published apps only, FTA, etc.)?
Here is an example of providing different levels of access based on the results of client security scans. If the user doesn't have Windows XP, they are denied. Full access is given to PCs that are running XP and have Prism, Symantec, and a particular registry key; access is reduced as users have fewer of these components.
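The tiered policy can be expressed as a small decision function. This is a hypothetical illustration; the tier names below ("full access", "reduced access", and so on) are made up for the sketch, and the real policy is configured on the Access Gateway, not written in code:

```python
def access_level(is_xp: bool, components_present: int) -> str:
    """Map endpoint-scan results to an access tier.

    Non-XP clients are denied outright; XP clients with all three checks
    (Prism, Symantec, the registry key) get full access, and access
    narrows as fewer checks pass.
    """
    if not is_xp:
        return "denied"
    if components_present >= 3:
        return "full access"
    if components_present == 2:
        return "reduced access"
    if components_present == 1:
        return "minimal access"
    return "quarantine"

print(access_level(is_xp=False, components_present=3))  # denied
print(access_level(is_xp=True, components_present=3))   # full access
```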
In most environments, communication between the AGEE and the back-end servers is as follows. Communication to the DNS and authentication servers occurs, in most topologies, using the NetScaler IP. In this example we are using LDAP or LDAPS, which means ports 389 and 636 must be open from the DMZ to the private network. Communication to the Web Interface and XenApp servers occurs using the Mapped or Subnet IPs; the typical ports required at the firewall are 80, 443, 1494, and 2598. Last, management traffic generally comes from an internal device to the NSIP. Initially users connect over 80 or 443 to the NSIP, but once the Java administration applet has launched, connections occur over 3010 for unsecured and 3008 for secured traffic. Whether you connected over HTTP or HTTPS initially determines whether the actual management connection is secured.
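The port requirements above can be collected into a small lookup for review against firewall rules. The path labels are made up for this sketch; the port numbers are the ones listed in the text:

```python
# Traffic paths from the topology above, with the ports each one requires.
REQUIRED_PORTS = {
    "dns_auth (NetScaler IP)":  [389, 636],              # LDAP / LDAPS
    "wi_xenapp (MIP/SNIP)":     [80, 443, 1494, 2598],   # WI, ICA, session reliability
    "management (NSIP)":        [80, 443, 3008, 3010],   # 3008 secured, 3010 unsecured
}

def firewall_allows(path: str, port: int) -> bool:
    """Check whether a given port must be open for the named traffic path."""
    return port in REQUIRED_PORTS.get(path, [])

print(firewall_allows("wi_xenapp (MIP/SNIP)", 1494))  # True
print(firewall_allows("dns_auth (NetScaler IP)", 1494))  # False
```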
The Access Gateway Wizard can be used to create/edit virtual servers, bind certificates, configure DNS/WINS settings, configure authentication settings, specify default authorization settings and access scenarios.
Availability: The Workflow Studio platform is available as a feature of the Citrix Delivery Center. All customers of XenApp, XenDesktop, XenServer, and NetScaler who are current on SA, for all editions except the Express editions, will be able to download it from MyCitrix.
Workflow Studio builds on the capabilities of Workflow Foundation and PowerShell. Workflow Foundation provides the visual designer, the base activity library functionality, and runtime services; Workflow Studio extends each of these to target the IT professional. Workflow Studio also melds Workflow Foundation with PowerShell, providing native support for PowerShell activities (not yet available in Workflow Foundation).

Starting at the bottom of the stack: automation is desired for a product or a group of products (both Citrix and 3rd-party); products expose functionality through an API; activity libraries expose that functionality to a workflow developer; and workflows can then be created that solve business problems. Activity libraries can be implemented as a raw translation of the product API, but this is not the most usable method. Some thought needs to go into what we expect customers to do with a workflow. We have built support for snippets into the core product to help facilitate this process, but these activity libraries still need to be 'designed'.
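The layering just described (product API → activity library → workflow → runtime) can be sketched in a few lines. The class names here are invented for illustration; Workflow Studio itself builds on Workflow Foundation and PowerShell, not on Python:

```python
class Activity:
    """Hypothetical base type: an activity wraps one piece of product
    functionality so a workflow author can compose it without touching
    the underlying API directly."""
    def run(self, context):
        raise NotImplementedError

class LogActivity(Activity):
    """Stand-in for a library activity that performs one product call."""
    def __init__(self, message):
        self.message = message

    def run(self, context):
        context.setdefault("log", []).append(self.message)
        return context

def run_workflow(activities, context=None):
    """The runtime: execute activities in order, threading shared state through."""
    context = context or {}
    for activity in activities:
        context = activity.run(context)
    return context

result = run_workflow([LogActivity("provision user"), LogActivity("grant rights")])
print(result["log"])  # ['provision user', 'grant rights']
```

The design point the slide makes applies here too: a usable activity is more than a one-to-one wrapper around an API call; it should match the task a workflow author is actually trying to accomplish.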
Let's review the list of activity libraries available today for Workflow Studio. Many Windows systems are supported:

The Active Directory and Group Policy libraries provide the building blocks necessary for user provisioning and rights management.

The Networking library provides remote shutdown of your Windows servers and desktops, and supports Wake-on-LAN for power-on.

The Windows and WMI libraries offer a broad range of activities for typical Windows OS management: reading from the Windows registry, querying performance counters, manipulating files, and accessing any data exposed via WMI.

The PowerShell library exposes the functionality of PowerShell and, in particular, lets you import and export CSV files.

Initial Citrix libraries are available for XenApp, XenServer, and NetScaler today, with XenDesktop support and deeper integration with product sub-features (like Provisioning Server and StorageLink) coming soon. An SDK is also available if you want to build your own libraries to integrate with other products.