How I reshaped my lab environment
Slide 1
From Zero to Colo - vCloud Director in my lab
With Mike Laverick (VMware)
Blog: www.mikelaverick.com
Email: mike@mikelaverick.com
Twitter: @mike_laverick
Slide 2
Before I begin
Thank You VMUG Leaders!
Competition Is Good…
www.eucbook.com
Slide 4
Agenda
The Home Lab Backstory - Long, Long ago in a galaxy called 2003…
Former vSphere Setup
CH-CH-CH Changes - vSphere 5.1 Setup
Compute
Network
Storage
vCD Lesson Learned…
My Lab To-Do List…
Slide 6
The Home Lab Backstory - Long, Long ago in 2003…
My first attempt with ESX 2.0/vCenter 1.0
Location: Under my desk
Girlfriend Impact: NIL
Slide 7
The vCloud Suite: SDDC Era
Virtual Appliances where possible/necessary
vCenter Server Appliance (VCSA)
Feature Parity with Windows version
Switch allowed me to completely reconfigure resources around the vCloud/SDDC agenda
Reduce “infrastructure VM” footprint
Beware of plug-ins; check support for the Web Client (e.g. NetApp VSC)
vCloud Director Virtual Appliances (vCD-VA)
Use built-in Oracle XE DB
Dead easy to set up (no packages, no DB setup)
Beware: No multi-cell, No migration
Beware: Demo only; Labs; Training purposes…
vShield Manager Virtual Appliance (Mandatory)
vSphere Replication Appliance (VR)
vSphere Data Protection Appliance (vDP)
Slide 8
vSphere5/SRM5.0/View5.1 Era
SRM 5.0 Period (2011)
Hello 2x Dell EqualLogics
Hello 1x NS-120 & 1x NS-20
Hello 2x NetApp 2040s
Hello massive colocation bill!!!
VMware Employee Period (2012)
HomeLab & ProLab Merge
Goodbye EMC
Goodbye 2xPDU
Hello 24U of extra rack space
Hello 14 amps of extra power!
Location: Quality Colocation
Costs: £870 GBP, $1,300 USD
Girlfriend Impact: Married 4th May 2013
Slide 9
Virtual Silos
The VMware Cluster as the New Silo?
Discrete Blocks of:
Compute
Network
Storage
Q. Why do we like silos?
Q. Why do we hate silos?
Slide 11
Compute Continued…
One Site; Two Clusters
“Infrastructure” Resource Pool – No Management Cluster
GOAL: Maximize resources; set up tiered clusters
Decisions:
Different CPU types forced DRS separation
Gold Cluster = HP DL 385s
WHY? = More memory & FC connected to SAS storage
Silver Cluster = Lenovo TS200
WHY? = Less RAM, only a 1Gb pipe to SAS/SATA storage over NFS/iSCSI
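If you want to sanity-check this kind of tiering, a minimal pyVmomi sketch can print each cluster's aggregate capacity. The VCSA address and credentials below are placeholders:

    import atexit, ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()           # home lab: self-signed certs
    si = SmartConnect(host="vcsa.lab.local",         # hypothetical VCSA address
                      user="administrator@vsphere.local", pwd="...", sslContext=ctx)
    atexit.register(Disconnect, si)

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)

    # Print per-cluster totals so the Gold/Silver capacity gap is obvious
    for cluster in view.view:
        cpu_ghz = cluster.summary.totalCpu / 1000.0  # totalCpu is reported in MHz
        mem_gb = cluster.summary.totalMemory / 1024 ** 3
        print(f"{cluster.name}: {len(cluster.host)} hosts, "
              f"{cpu_ghz:.1f} GHz, {mem_gb:.0f} GB RAM")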
Slide 14
Storage Anxieties…
Many Organizational Tenants sharing the SAME datastore
What about Site Recovery Manager?
What about performance – Capacity management isn’t the issue
With Array-based Replication (ABR)
One Failover to rule them all?
No per-vApp Failover
No per-Organization failover
Solutions?
Platinum/Gold datastores per-Organization
vSphere Replication
VMware vVols
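A per-Organization datastore layout is easier to police if you can dump capacity by name. Another minimal pyVmomi sketch; the tier/Org naming scheme (e.g. "Gold-CORPHQ-01") and the vCenter details are assumptions:

    import atexit, ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()           # lab only: skip cert checks
    si = SmartConnect(host="vcsa.lab.local",         # hypothetical VCSA address
                      user="administrator@vsphere.local", pwd="...", sslContext=ctx)
    atexit.register(Disconnect, si)

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)

    # List every datastore with free/total capacity, sorted by name so a
    # "<Tier>-<OrgName>-<nn>" convention groups Organizations together
    for ds in sorted(view.view, key=lambda d: d.name):
        free_gb = ds.summary.freeSpace / 1024 ** 3
        cap_gb = ds.summary.capacity / 1024 ** 3
        print(f"{ds.name}: {free_gb:.0f} GB free of {cap_gb:.0f} GB")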
Slide 16
Network Continued…
Goodbye Standard Switch
Struggled to provide redundancy/separation with the “Combo Approach”
Many of the advanced features of vCD require a Distributed vSwitch
Classical Approach:
Two DvSwitches
One for internal vSphere networking (vMotion, IP Storage, FT, Management)
One for Virtual DataCenter
Backed by two VMNICs each…
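Building those two switches is scriptable as well. A sketch based on the pyVmomi community pattern for creating a DVS; the datacenter lookup, switch name and uplink names are all assumptions:

    import atexit, ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()           # lab only
    si = SmartConnect(host="vcsa.lab.local",         # hypothetical VCSA address
                      user="administrator@vsphere.local", pwd="...", sslContext=ctx)
    atexit.register(Disconnect, si)

    # Assumes the first root inventory item is the datacenter
    dc = si.RetrieveContent().rootFolder.childEntity[0]

    # One of the two switches: the Virtual DataCenter DvSwitch,
    # backed by two uplinks (one per VMNIC, as above)
    spec = vim.DistributedVirtualSwitch.CreateSpec()
    spec.configSpec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configSpec.name = "DvSwitch-vDC"            # hypothetical name
    spec.configSpec.uplinkPortPolicy = \
        vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
            uplinkPortName=["Uplink1", "Uplink2"])

    task = dc.networkFolder.CreateDVS_Task(spec)
    print("DVS create task submitted:", task.info.key)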
Slide 17
Network Anxieties…
All my Provider vDCs share the SAME DvSwitch
What about “Fat Finger Syndrome”?
How realistic is that?
Time to re-examine “Best Practices”
Do best practices represent an ideal, OR an ideal filtered through the limitations of a technology?
Provider vDCs in vCD 1.x – One Cluster, No Tiering of Storage
Provider vDCs in vCD 5.x – Many clusters, Tiering of Storage
Slide 18
Lesson Learned
When thinking about a Provider vDC
All the resources matter
Compute + Storage + Networking
By far the easiest for me was compute
But my “Gold” cluster has no FT Support
Prepare to make compromises/trade-offs
UNLESS all your hosts are the SAME
VXLAN needs enabling on Distributed Switches via the vSphere Client
Prior to creating a Provider vDC
Watch out for VMs already on the cluster – the vCD ESX Agent
Running existing “infrastructure” VMs on a cluster
Stops the install of the vCD Agent
The fix has to be done on a per-ESX-host basis (easy)
Slide 19
More Lessons Learned…
Get your VLANs sorted BEFORE you use them in vCD…
Beware of Orphaned VLAN references in the vCD Databases
http://kb.vmware.com/kb/2003988
Slide 20
Work out your IP before you start!
“Wrong”
192.168.3.x – “External Network”
172.16.x.x – “Organization Network”
10.x.x.x – “vApp Network”
“Right”
10.x.x.x – “External Network”
172.16.x.x – “Organization Network”
192.168.1.x – “vApp Network”
Keep it simple – dedicate whole ranges to each role
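You can prove the plan is clean before building anything, using only the Python standard library. A sketch of the “Right” layout above; the exact prefix lengths are assumptions:

    import ipaddress

    # One dedicated range per network layer, per the "Right" plan
    plan = {
        "External Network": ipaddress.ip_network("10.0.0.0/8"),
        "Organization Network": ipaddress.ip_network("172.16.0.0/12"),
        "vApp Network": ipaddress.ip_network("192.168.1.0/24"),
    }

    # Fail fast on any overlap - repairing ranges later is painful (see next slide)
    nets = list(plan.items())
    for i, (name_a, net_a) in enumerate(nets):
        for name_b, net_b in nets[i + 1:]:
            if net_a.overlaps(net_b):
                raise ValueError(f"{name_a} ({net_a}) overlaps {name_b} ({net_b})")
    print("No overlaps - IP plan is clean")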
Slide 21
IP Ranges can be tricky to change
Even with vApps powered off, some options remain unavailable:
Gateway Address
Network Mask
Resolution involves admin work:
Add new vApp Network
Remap all VMs to new vApp Network
Remove old vApp Network
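Before that remap, capture the vApp's current network config so you can rebuild it faithfully. A minimal sketch against the vCloud REST API; the cell address, credentials and vApp id are placeholders:

    import requests

    VCD = "https://vcd.lab.local"                    # hypothetical cell address
    s = requests.Session()
    s.verify = False                                 # lab only: self-signed cert
    s.headers["Accept"] = "application/*+xml;version=5.1"

    # Log in; vCD returns a session token in the x-vcloud-authorization header
    r = s.post(f"{VCD}/api/sessions", auth=("admin@CORPHQ", "password"))
    r.raise_for_status()
    s.headers["x-vcloud-authorization"] = r.headers["x-vcloud-authorization"]

    # Dump the vApp's networkConfigSection before touching anything
    r = s.get(f"{VCD}/api/vApp/vapp-xxxx/networkConfigSection/")  # placeholder id
    r.raise_for_status()
    print(r.text)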
Slide 22
vApp Networks & Edge Gateway
Every vApp Network you create:
Creates a vCNS Edge Gateway
Consumes resources
Solution
Create two vApps per Organization
TypeA: One on the Organization Network
TypeB: One on its own vApp Network
Power off the Type B vApp to save resources
Beware of static MAC/IP on Power Offs
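Powering the Type B vApps off on a schedule is scriptable through the vCloud REST API's power actions. Another hedged sketch, with the same placeholder cell address, credentials and vApp id:

    import requests

    VCD = "https://vcd.lab.local"                    # hypothetical cell address
    s = requests.Session()
    s.verify = False                                 # lab only: self-signed cert
    s.headers["Accept"] = "application/*+xml;version=5.1"

    # Log in; the session token comes back in the x-vcloud-authorization header
    r = s.post(f"{VCD}/api/sessions", auth=("admin@CORPHQ", "password"))
    r.raise_for_status()
    s.headers["x-vcloud-authorization"] = r.headers["x-vcloud-authorization"]

    # Power off the Type B vApp to hand its Edge Gateway resources back
    r = s.post(f"{VCD}/api/vApp/vapp-xxxx/power/action/powerOff")  # placeholder id
    r.raise_for_status()
    print("Power-off task submitted:", r.status_code)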
Slide 23
Establish a meaningful naming convention…
I KNOW EVERYONE SAYS THIS, BUT IN A HOME LAB DON’T YOU CUT CORNERS SOMETIMES?
<ORGNAME>-<NetworkType>-<Purpose>
CORPHQ-OrgNetCorp-EdgeGateway
CORPHQ-vAppNet-WebGateway
Makes screengrabs, documentation & troubleshooting soooo much easier…
Register Edge Gateway devices in DNS…
Helps with SysLog – watch out for stale DNS Records…
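The convention is easy to enforce, and the DNS registration easy to verify, with a few lines of stdlib Python. The lab domain here is an assumption:

    import socket

    def edge_name(org: str, net_type: str, purpose: str) -> str:
        """Build a name per the <ORGNAME>-<NetworkType>-<Purpose> convention."""
        return f"{org}-{net_type}-{purpose}"

    names = [edge_name("CORPHQ", "OrgNetCorp", "EdgeGateway"),
             edge_name("CORPHQ", "vAppNet", "WebGateway")]

    # Check each Edge Gateway is registered in DNS - stale records break syslog
    for name in names:
        fqdn = f"{name}.lab.local"                   # hypothetical lab domain
        try:
            print(f"{fqdn} -> {socket.gethostbyname(fqdn)}")
        except socket.gaierror:
            print(f"{fqdn} -> NOT in DNS; register it before enabling syslog")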
JOKE: Yeah, I did try to install ESX 2.x on an IDE PC and found it wouldn’t see the disk. Slides 4-10 I will run through very quickly, I mean less than a minute per slide… I could hide slides 5-9 and just show I went from Zero to Colo…