In this 90-minute hands-on workshop, key contributors to OpenNebula walk attendees through the configuration and integration aspects of the computing subsystem in OpenNebula. The session also includes lightning talks by community members on topics related to Hypervisors and Containers with OpenNebula:
Deployment scenarios
Integration
Tuning & debugging
Best practices
5. Reference Architecture

                 | Basic                                  | Advanced
Operating System | Supported OS (Ubuntu or CentOS/RHEL) in all machines, with the specific OpenNebula packages installed
Hypervisor       | KVM
Networking       | VLAN 802.1Q                            | VXLAN
Storage          | Shared file system (NFS/GlusterFS) using qcow2 format for Image and System Datastores | Ceph cluster for Image Datastores, and a separate shared FS for the System Datastore
Authentication   | Native authentication or Active Directory

Basic and Advanced Implementations
7. Reference Architecture
Network Implementations

Private Network | Communication between VMs
Public Network  | To serve VMs that need internet access
Service Network | For front-end and virtualization node communication (including inter-node communication for live migration), as well as for storage traffic
Storage Network | To serve the shared filesystem or the Ceph pools to the virtualization nodes
12. Cgroups
What is it?
● Enforces the CPU share assigned to each VM
● A VM with CPU=0.5 gets half the CPU of a VM with CPU=1.0
● The total memory used by the VMs can also be limited
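The CPU rule above can be sketched numerically. Under cgroups, libvirt weights VMs via CPU shares; the linear mapping and the 1024 base value below are an illustration of the proportionality, not the exact driver code:

```python
# Illustrative mapping from OpenNebula's CPU attribute to cgroup CPU shares.
# A VM with CPU=1.0 gets the base weight; CPU=0.5 gets exactly half,
# matching the "half of another VM" rule above.
def cpu_shares(cpu: float, base: int = 1024) -> int:
    return int(cpu * base)

print(cpu_shares(1.0))  # 1024
print(cpu_shares(0.5))  # 512
```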
How?
● Check your distro
● Configuration is done on the hosts (not on the front-end)
● There is a cgroups service
● Enable it in /etc/libvirt/qemu.conf
● Add libvirt to /etc/cgrules.conf
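As a sketch, the host-side files mentioned above might look like this; the controller list and the destination group name are assumptions, so check your distro's cgroups documentation:

```shell
# /etc/libvirt/qemu.conf -- cgroup controllers libvirt should use for VMs
cgroup_controllers = [ "cpu", "memory", "cpuset" ]

# /etc/cgrules.conf -- route libvirt-spawned processes to a cgroup
# (format: <user>:<process>  <controllers>  <destination group>)
root:libvirtd       cpu,memory      virt/
```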
13. Fast VM Deployments
● Libvirt listens by default on a UNIX socket
● No concurrent operations over it
/etc/one/sched.conf
# MAX_HOST: Maximum number of Virtual Machines dispatched to a given host in
# each scheduling action
MAX_HOST = 1
● Enable the TCP socket in libvirtd.conf
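A minimal sketch of both changes; the auth_tcp value is a lab-only assumption (production setups should use SASL or TLS), and libvirtd must also be started with the --listen flag:

```shell
# /etc/libvirt/libvirtd.conf -- enable the TCP socket for concurrent operations
listen_tcp = 1
auth_tcp = "none"     # insecure; for isolated lab networks only

# /etc/one/sched.conf -- allow several VMs per host per scheduling cycle
MAX_HOST = 10
```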
14. RAW
If it's supported by Libvirt… it's supported by OpenNebula
RAW = [
  type = "kvm",
  data = "<devices>
    <serial type='pty'><source path='/dev/pts/5'/><target port='0'/></serial>
    <console type='pty' tty='/dev/pts/5'><source path='/dev/pts/5'/><target port='0'/></console>
  </devices>"
]
Libvirt Deployment File (XML)
15. Improve Performance
● Paravirtualized (virtio) drivers
  ○ Network
  ○ Storage
Enable them by default:
/etc/one/vmm_exec/vmm_exec_kvm.conf
NIC = [ MODEL = "virtio" ]
/etc/one/oned.conf
DEFAULT_DEVICE_PREFIX = "vd"
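The defaults above can also be requested per VM; a hypothetical template fragment (the network and image names are placeholders):

```shell
# Hypothetical VM template fragment enabling virtio explicitly
NIC  = [ NETWORK = "private", MODEL = "virtio" ]
DISK = [ IMAGE = "ubuntu-base", DEV_PREFIX = "vd" ]
```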
16. Further Tips
KSM
● Kernel Samepage Merging
● Merges identical private memory pages across VMs
● Increases VM density
● Enabled by default in CentOS
SPICE
● Native in OpenNebula >= 4.12 (qxl display driver)
● Redirects printers, USB (mass storage), audio
17. Further Tips
virsh capabilities
/usr/share/libvirt/cpu_map.xml
OS = [ MACHINE = "..." ]
Cache
● Writethrough
  ○ host page cache on, guest disk write cache off
● Writeback
  ○ Good overall I/O performance
  ○ host page cache on, disk write cache on
● None
  ○ Good write performance
  ○ host page cache off, disk write cache on
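The cache mode can be chosen per disk in the VM template; a sketch, assuming the DISK/CACHE attribute (the image name is a placeholder):

```shell
# Hypothetical disk definition selecting the "none" cache mode
DISK = [ IMAGE = "ubuntu-base", CACHE = "none" ]
```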
18. vCenter Approach
[Diagram: OpenNebula sits on top of vCenter, which manages the VMware hosts, analogous to OpenNebula sitting directly on top of KVM]

vCenter (Virtual Infra Management):
•Capacity management
•Multi-VM management
•Resource optimization
•HA and business continuity

OpenNebula (Cloud Management):
•VDC multi-tenancy
•Simple cloud GUI and interfaces
•Service elasticity/provisioning
•Federation/hybrid
20. Reference Architecture

               | Description
Front-end      | Supported OS (Ubuntu or CentOS/RHEL), with the specific OpenNebula packages installed
Hypervisor     | VMware vSphere (managed through vCenter)
Networking     | Standard and Distributed Switches (managed through vCenter)
Storage        | Local and Networked (FC, iSCSI, SAS) (managed through vCenter)
Authentication | Native authentication or Active Directory

Summary of the implementation
24. Overview
Key Points
● VMware workflows are preserved
● Leverages vMotion, HA and DRS
● Templates and Networks must already exist
● Each vCenter cluster is an OpenNebula Host
  ○ OpenNebula chooses the Host (vCenter cluster)
  ○ VMware DRS chooses the ESX host
● VMware Tools needed in the guest OS
Limitations
● Security Groups
● Files passed in the Context
26. Importing Clusters
● Use Sunstone to import vCenter Clusters
● The CLI tool also provides that functionality
● It manages subsequent import actions
27. Importing Templates
● A Template must already be defined in OpenNebula.
● It must contain all the basic information needed to deploy it.
● During instantiation we can add an extra network, but not remove existing ones.
30. Importing Networks
● The Network must exist in OpenNebula.
● When importing, we can assign an IP range to the Network.
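The assigned range becomes an address range (AR) on the imported network; a sketch, with placeholder addresses:

```shell
# Hypothetical address range attached to an imported vCenter network
AR = [ TYPE = "IP4", IP = "10.0.0.10", SIZE = "100" ]
```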
31. Importing VMs
● Wild VMs (VMs not created through OpenNebula) can be imported
● After importing, VMs can be managed by OpenNebula
● The following operations cannot be performed:
○ delete --recreate
○ undeploy
○ migrate
○ stop
32. Importing Datastores and VMDKs
● Available through CLI and Sunstone
● Same mechanism as with VMs, Networks and Templates
33. Importing Datastores and VMDKs
vCenter datastores supported in OpenNebula
● Monitoring of Datastores and VMDKs
● VMDK Creation
● VMDK Upload
● VMDK Cloning
● VMDK Deletion
Persistent VMDK
VMDK Hotplug supported
● Attach disk
34. Contextualization
● Two supported contextualization methods:
  ○ vCenter Customizations
  ○ OpenNebula
● OpenNebula Contextualization works for both Windows and Linux.
● START_SCRIPT is supported
35. Scheduling
● OpenNebula chooses a Host (a vCenter Cluster)
● The specific ESX host is selected by vCenter (DRS)
● A specific Cluster can be forced:
SCHED_REQUIREMENTS = "NAME=\"<vcenter_cluster>\""