The complexity of a typical OpenNebula installation brings a special set of challenges on the monitoring side. In this talk, I will show monitoring of the full stack, from the physical servers to the storage layer and the ONE daemon. Providing an aggregated view of this information lets you see the real impact of a given failure. I would also like to present a use case for a "closed-loop" setup where new VMs are added to the monitoring automatically, without human intervention, allowing for an efficient approach to monitoring the services an OpenNebula setup provides.
2. Hi! That's me!
Unix sysadmin / freelance consultant.
Storage
Virtualization
Monitoring
HA clusters
Backups (if you had them)
Bleeding edge software (fun, but makes you grumpy)
3. What else?
• Created the first embedded Xen distro (and other weird things)
• Training: Monitoring, Linux Storage (LVM, Ceph...)
• On IRC @darkfader, on Twitter @FlorianHeigl1
Making monitoring more useful is priority #1 for me.
Reap the benefits!
4. OpenNebula
My love:
• Abstraction / Layering (oZones, VNets, Instantiation)
• Hypervisor abstraction (write a Jail driver and a moment
later it could set up FreeBSD jails)
• Something happens if you report a bug.
My hate:
• Feature imparity
• Complexity „spikes“
• Unknown states
• Scheduler
5. We've all run Nagios once?
Not new:
• Systems and Application Monitoring
• Nagios
But:
• #monitoringsucks on Twitter is quite busy
• Managers still unhappy?
6. Interruption
How come there were no checks for OpenNebula?
• Skipped a few demos
• Added checks so I can actually show *something*
• https://bitbucket.org/darkfader/nagios/src/
7. Monitoring Systems
• Keep an eye out for redundancy
• monitor everything. EVERYTHING. monitor!
• But think about „capacity“
• I don't care if my disk does 200 IOPS (except when I'm tuning my IO stack)
• I do care if it's maxed!
• My manager doesn't care if it's maxed?
8. Monitoring Applications
• We know how to monitor a process, right?
Differentiate:
• Checking software components
I don't care if a process on one HV is gone.
Nor does the manager, nor does the customer.
• End-to-End checks
Customers will care if Sunstone dies.
Totally different levels of impact!
9. Monitoring Apps & Systems
Choose a strategy:
• Every single piece (proactive, expensive)
• Something hand-picked (reactive)
Limited by resources, pick monitoring functionality over
monitoring components.
Proactively monitoring something random?
Doesn't work.
10. Examples
• This is so I don't forget to give examples for the last slide.
• So, let's go back.
11. Dynamic configuration
• You might have heard of Check_MK and inventory. Some think that's it.
• But... sorry... I won't talk (a lot) about that.
• We‘ll be talking about dynamic configuration
• We‘ll be talking about rule matching
• We‘ll be talking about SLAs
12. Business KPIs
• „Key Performance Indicators“
• Not our kind of performance.
• I promise there is a reason to talk about this
Were you ever asked to provide
• Reports and fancy graphs
• What impact a failure is going to have
As if you had a damn looking glass on your desk, right?
13. The looking glass
• Assume we know how to monitor it all.
• Let's ask what we're monitoring.
15. Ponder on that:
• All your aircos, [redundancy] and all, have failed.
• Isn't your cloud still [available]?
• Your filers are being thrashed by the Nagios VM, crippling [performance]. Everything is still [available], but cloning a template takes an hour.
• Will that impact [business operations]?
16. Ponder on that too:
Assume you're hosting a public cloud.
How will your [business operations] lose more money:
1. A hypervisor is no longer [available] and you even lose
5 VM images
2. Sunstone doesn‘t work for 5 hours
Disclaimer: Your actual business's requirements may differ from this example.
17. Losing your accounting...
A very recent example:
„That is really bad. A whole range of things stops working because of it,
e.g. power and traffic accounting in the data center, creating and managing
domains etc. We have to fix this very quickly, otherwise we can bill
!nothing!, since nothing gets logged, nothing can be created and nothing
can be looked up.“
18. That KPI stuff creeps back
• All VMs are running, Sunstone is fine. Storage utilization is low, plenty of capacity for new VMs.
• => [availability] [redundancy] [performance]: all A+
• But you have a BIG problem.
• You didn't notice, because you „just“ monitored that every piece of „the cloud“ works.
• Customers are switching to another provider!
• Couldn't you easily have noticed anyway?
19. Into: Business
• VM creations / day => revenue
• User registrations / day => revenue
• Time to „bingo point“ for storage
Those are „KPIs“.
Talk to your boss's boss about them.
You could:
• Set alert levels for revenue
• Set alert levels for customer acquisitions
• Set alert levels on SLA penalties
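A minimal sketch of how such a KPI can become an ordinary check: a Check_MK local check counting the VMs created in the last 24 hours from „onevm list -x“ (STIME is the VM creation timestamp). The threshold and the service name are assumptions, adjust them to your business.

    #!/usr/bin/env python
    # Check_MK local check: warn when fewer VMs than expected were created today.
    import subprocess, time
    import xml.etree.ElementTree as ET

    WARN_CREATIONS_PER_DAY = 10   # assumed KPI threshold

    root = ET.fromstring(subprocess.check_output(["onevm", "list", "-x"]))
    day_ago = time.time() - 86400
    created = sum(1 for vm in root.findall("VM")
                  if int(vm.findtext("STIME", "0")) > day_ago)

    status = 0 if created >= WARN_CREATIONS_PER_DAY else 1
    print("%d VM_creations_per_day creations=%d %d VMs created in the last 24h"
          % (status, created, created))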
22. Into: Availability
• Checks need to be reliable
• Avoid anything that can „flap“
• Allow for retries, even allow for larger intervals
• „Wiggle room“
• Reason: DESTROY any false alerts
• Invent more End2End / Alive Checks
Nagios/Icinga users:
• You must(!) take care of Parent definitions
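For Check_MK users, parents can be a rule instead of hand-maintained Nagios host definitions. A minimal main.mk sketch, assuming the hypervisors carry a „one-hv“ tag and sit behind one access switch:

    # Give every host tagged "one-hv" the rack switch as its Nagios parent,
    # so "host unreachable" is not reported as dozens of "host down" alerts.
    extra_host_conf["parents"] = [
        ( "switch-rack1", [ "one-hv" ], ALL_HOSTS ),
    ]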
23. Example: Availability
• Checks that focus on availability
• Top down, all the way to:
• „doesn't ping“
• bonded NIC
• missing process
Aggregation rules:
• „all“ DNS servers are down
• bus factor is „too low“
• Can your config understand the SLAs?
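The „all DNS servers are down“ aggregation is nothing magical. A tool-agnostic sketch: get_service_state() stands in for however you query your monitoring core (e.g. Livestatus), and the host names are assumptions.

    DNS_SERVERS = ["ns1", "ns2", "ns3"]

    def dns_cluster_state(get_service_state):
        # Alert hard only when the whole redundancy group is gone.
        down = [h for h in DNS_SERVERS if get_service_state(h, "DNS") != "OK"]
        if len(down) == len(DNS_SERVERS):
            return "CRIT", "all DNS servers are down"
        if down:
            return "WARN", "degraded, down: %s" % ", ".join(down)
        return "OK", "all DNS servers answering"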
24. Into: Performance
• Constant, low intervals
• One thing measured at multiple points
• Historical data and predicting the future
• Ideally, only alert based on performance issues
• Interface checks: BAD!
• one alert for three things? link loss, BW limit, error rates
• => historical graphs of „unicorns per second“?
• => loses meaning
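One way out of the „one alert for three things“ trap: derive separate services from the same interface data, so each alert keeps its meaning. A sketch; the link speed and thresholds are assumptions.

    IF_SPEED_BPS = 10 * 10**9                    # assumed 10 GbE link

    def if_link_state(oper_up):                  # -> Availability
        return "OK" if oper_up else "CRIT"

    def if_utilization(octets_per_sec):          # -> Capacity / Performance
        used = 8.0 * octets_per_sec / IF_SPEED_BPS
        return "CRIT" if used > 0.9 else "WARN" if used > 0.8 else "OK"

    def if_errors(errors_per_sec):               # -> Performance
        return "WARN" if errors_per_sec > 0 else "OK"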
25. Example: Performance
Monitoring IO subsystem
• Monitoring disk BW / IOPS / queue / latency
• Per disk (xxx MB/s, 200 IOPS / queue 4 / 30 ms)
• Per host (x GB/s, 4000 IOPS / queue 512 / 30 ms)
• Replication traffic as % of disk IO and % of net IO
Homework: Baseline / Benchmark
Turn into „Power reserve“ alerts, aggregate over all hosts.
• Nobody ever did it.
• Nobody stops us, either
27. Capacity?
Turn some checks into „Power reserve“ alerts.
Nobody ever did it.
Nobody stops us, either.
Example: a one_hosts summary check, aggregated over all hosts.
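A sketch of that summary check: sum CPU capacity and usage over all hosts from „onehost list -x“ and alert on the remaining reserve. The HOST_SHARE field names are taken from the host XML (adjust to your version); thresholds and the service name are assumptions.

    #!/usr/bin/env python
    # "Power reserve": how much CPU headroom is left over all hypervisors?
    import subprocess
    import xml.etree.ElementTree as ET

    WARN_RESERVE, CRIT_RESERVE = 0.30, 0.15      # assumed thresholds

    root = ET.fromstring(subprocess.check_output(["onehost", "list", "-x"]))
    hosts = root.findall("HOST")
    max_cpu = sum(int(h.findtext("HOST_SHARE/MAX_CPU", "0")) for h in hosts)
    used_cpu = sum(int(h.findtext("HOST_SHARE/CPU_USAGE", "0")) for h in hosts)
    reserve = (1.0 - float(used_cpu) / max_cpu) if max_cpu else 0.0

    status = 0 if reserve > WARN_RESERVE else 1 if reserve > CRIT_RESERVE else 2
    print("%d ONE_power_reserve reserve=%.2f %.0f%% CPU reserve over all hosts"
          % (status, reserve, reserve * 100))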
28. Into: Redundancy
Monitor all components and the sublayers making them up.
Associate them:
• Physical Disks
• SAN Lun, Raid Vdisk, MD Raid volume
• Filesystem...
Make your alerting aware.
Make it differentiate...
29. Example: Redundancy
Why would you get the same alert for:
• A broken disk in a RAID10 + hot spare under a DRBD volume?
• A lost LUN
• A crashed storage array
What are your goals
• for replacing a broken disk that is protected
• for MTTR on an array failure
=> you really need to adjust your „retries“
30. Create rules to bind them
• An eye on details
• Relationships
• Impact analysis
• Cloud services: Constantly changing platform
⇒ Close to impossible to maintain manually
⇒ Infra as Code is more than a Puppet class adding a
dozen „standard“ service checks.
31. Approach
1. Predefine monitoring rulesets on expectations
2. Externalize SLA info (thresholds) for rulesets
3. Create Business Intelligence / Process rulesets that
match on attributes (no hardwire of objects)
4. Use live, external data for identifying monitored objects
5. Handling changes: Hook into ONE and Nagios
6. Sit back, watch it fall into place.
32. Predefine rules
ONEd must be running on Frontends
Libvirtd must be running on HV Hosts
KVM must be loaded on HV Hosts
Diskspace on /var/libvirt/whatever must be OK on HV Hosts
Networking bridge must be up on HV Hosts
Router VM must be running for networks
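Kept as data, those expectations never have to mention a concrete host, only host tags. A sketch in the spirit of the snippet on the „Live data“ slide; tag names, service names and the libvirt path are assumptions.

    # Expectations as (host tag, service description) pairs.
    # Concrete hosts are matched later, at config generation time.
    expected_services = [
        ( "one-frontend", "Process oned" ),
        ( "one-hv",       "Process libvirtd" ),
        ( "one-hv",       "Kernel module kvm" ),
        ( "one-hv",       "Filesystem /var/lib/libvirt" ),
        ( "one-hv",       "Bridge br0 up" ),
        ( "one-vnet",     "VM vrouter" ),
    ]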
33. Externalize SLAs
• IOPS reserve must be over <float>% threshold
• Free storage must be enough for <float> hours' growth plus snapshots on <float>% of existing VMs
• Create a file with those numbers
• Source it and fill the gaps in your rules at config generation time
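A minimal sketch of such an SLA file and of sourcing it at config generation time; the file name, variable names and the tag are assumptions.

    # sla.py - the only file the SLA owner ever needs to touch
    IOPS_RESERVE_PCT   = 25.0   # alert when less IOPS headroom is left
    STORAGE_GROWTH_H   = 48.0   # free space must cover this many hours of growth
    SNAPSHOT_RATIO_PCT = 20.0   # ...plus snapshots on this share of existing VMs

    # --- in the config generator: source it and fill the gaps ---
    execfile("sla.py")          # Python 2 style "sourcing", as in Check_MK 1.x configs
    checks += [ ( ["one-storage"], "IOPS reserve", IOPS_RESERVE_PCT ) ]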
34. Build Business aggregations
ONEd must be running on Frontend
Libvirtd must be running on HV Hosts
KVM must be loaded on HV Hosts
Diskspace on /var/libvirt/whatever must match SLA on HV
Hosts
Networking bridge must be up on HV Hosts
Router VM must be running for networks
-> Platform is available
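A tool-agnostic sketch of that aggregation, reusing the expected_services table from the „Predefine rules“ sketch: „Platform is available“ is simply the worst state over all of its pieces. get_state_by_tag() stands in for a query against your monitoring core; in Check_MK this would be a BI rule.

    RANK = {"OK": 0, "WARN": 1, "CRIT": 2, "UNKNOWN": 3}

    def platform_available(get_state_by_tag):
        # get_state_by_tag(tag, service) -> worst state over all hosts with that tag
        states = [get_state_by_tag(tag, svc) for tag, svc in expected_services]
        return max(states, key=lambda s: RANK.get(s, 3))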
35. Live data
• ONE frontend nodes know about all HV hosts
• All about their resources
• All about their networks
• So let's source that.
• Add attributes (which we do know) automatically
• The rules will match on those attributes
for vnet in _one_info["vnets"].keys():
    checks += [ ( ["one-infra"], "VM vrouter-%s" % vnet ) ]
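Where does _one_info come from? A sketch that sources it from the frontend's own CLI in XML mode; the dictionary layout is an assumption made to match the loop above.

    import subprocess
    import xml.etree.ElementTree as ET

    def one_xml(cmd):
        return ET.fromstring(subprocess.check_output(cmd))

    _one_info = {
        "hosts": [h.findtext("NAME")
                  for h in one_xml(["onehost", "list", "-x"]).findall("HOST")],
        "vnets": dict((v.findtext("NAME"), v.findtext("ID"))
                      for v in one_xml(["onevnet", "list", "-x"]).findall("VNET")),
    }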
36. We can haz config!
• Attributes == Check_MK host tags
• Check_MK rules made on attributes, not hosts etc.
• Rules suddenly match as objects are available
• Rules inherit SLA data
• Check_MK writes out valid Nagios config
=> The pieces have fallen into place.
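In Check_MK terms the generated piece can be as small as an all_hosts list in which every host carries its tags; the rules then match without naming a single host. A sketch, frontend name and tags assumed.

    # Generated main.mk fragment: "hostname|tag|tag" is Check_MK's host tag syntax.
    all_hosts += [ "%s|one-hv|one-infra" % host for host in _one_info["hosts"] ]
    all_hosts += [ "frontend01|one-frontend|one-infra" ]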
37. Change... happens
• We now have a fancy config.
But... once Nagios is running, it's running.
• How will Check_MK detect new services (i.e. Virtual Machines)?
• How will you not get stupid alerts after onehost delete?
• How will a new system be added into Nagios automatically?
Please: don't say crontab! Use hooks!
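A sketch of the hook idea: OpenNebula calls a script on VM state changes (wired up as a VM_HOOK in oned.conf), and the script re-inventorizes and reloads Check_MK. Script location, host naming and the exact hook arguments are assumptions; check your ONE and Check_MK versions.

    #!/usr/bin/env python
    # Called by an OpenNebula VM_HOOK (e.g. on CREATE / DONE) with the VM id.
    import subprocess, sys

    vm_id = sys.argv[1]
    vm_host = "one-vm-%s" % vm_id                 # assumed naming convention

    # A real hook would first regenerate all_hosts from the ONE data (see above),
    # then rediscover services and reload the monitoring core.
    subprocess.call(["check_mk", "-II", vm_host])
    subprocess.call(["check_mk", "-O"])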
38. How do I use this
OpenNebula Marketplace:
• Would like to add a preconfigured OMD monitoring VM
• Add context: SSH info for ONE frontend
• Test, poke around, ask questions, create patches
40. Monitoring
3 Monitoring Sites
• Availability
• Capacity
• Business Processes
Use preconfigured rulesets
...that differ.
Goal: Nothing hardcoded
41. Monitoring
Different handling:
Interface link state -> Availability
Interface IO rates -> Capacity
Rack Power % -> Capacity
Rack Power OK -> Availability
Sunstone:
Availability
Business Processes
42. Interface
1. HOOK injects services (or hosts)
2. Each monitoring site filters what applies to it
3. Rulesets immediately apply to new objects
• Central Monitoring to aggregate (...them all)