The benefits of cloud are indisputable. Storage, however, remains a complex, expensive aspect of setting up a cloud, and one you can’t afford to get wrong. When it comes to storage for OpenStack, one size doesn’t fit all, and you need to choose the right tool for the job.
Take a look at this presentation to learn:
* What workloads are best suited for performance-optimized block storage
* What storage features are critical to the success of your OpenStack cloud
* How and where to utilize complementary object storage
4. From Virtualization to Orchestration
▪ First there was virtualization…and it was good
▪ For smaller scale use cases it still is good
▪ But, when scaling virtual environments…
▪ Hassle of adding and deploying hypervisors
▪ Storage performance degradation
▪ Networking headaches
▪ Management complexity
▪ Something had to change
5. But Why Adopt OpenStack?
Source: http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014
● Ability to innovate: when infrastructure maintenance ceases to consume spare cycles, time can be spent focusing instead on innovating features and functionality
● Open technology: open source software provides greater flexibility, interoperability and the ability to try it out before buying
● Cost savings: open source technology eliminates most, if not all, of the costs of initial purchase, licensing and expensive support renewals
● Avoiding vendor lock-in: you are no longer beholden to one vendor for products, services or proprietary APIs, nor subject to onerous switching costs
7. Making choices
can be the
HARDEST part!
● Each storage option has its own merits
● Some excel at specific use cases
● Maybe you already own the gear
● TCO, TCO, TCO
Ask yourself:
➔ Does it scale?
➔ Is the architecture a good fit?
➔ Is it tested? Will it really work in OpenStack?
➔ Support?
➔ What about performance and noisy neighbors?
➔ Third party CI testing?
➔ Active in the OpenStack Community?
➔ DIY, Services, both/neither?
8. Types of Storage in OpenStack, and example use cases
● Ephemeral
○ Non-persistent
○ Life cycle coincides with an instance
○ Usually a local FS/QCOW file
● Block
○ Foundation for the other types
○ Think raw disk
○ Typically higher performance
○ Cinder
● Object
○ Manages data as... an “Object”
○ Think images etc.
○ Typically “cheap and deep”
○ Predominantly Swift
● Shared FS
○ We all know and love NFS
○ Soon to be Manila
9. What’s the difference between block and object?
Cinder / Block Storage
● Objectives
○ Storage for running VM disk volumes on a host
○ Ideal for performance-sensitive apps
○ Enables an Amazon EBS-like service
● Use Cases
○ Production applications
○ Traditional IT systems
○ Database-driven apps
○ Messaging / collaboration
○ Dev / test systems
● Workloads
○ High-change content
○ Smaller, random R/W
○ Higher / “bursty” IO
Swift / Object Storage
● Objectives
○ Ideal for low-cost, scale-out storage
○ Fully distributed, API-accessible
○ Well suited for backup, archiving, data retention
○ Enables a Dropbox-like service
● Use Cases
○ VM templates
○ ISO images
○ Disk volume snapshots
○ Backup / archive
○ Image / video repository
● Workloads
○ Typically more static content
○ Larger, sequential R/W
○ Lower IOPS
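The contrasts above can be condensed into a rough rule of thumb. The sketch below is an illustrative heuristic only (the function name and thresholds are invented for demonstration, not part of any OpenStack API): it suggests block or object storage from a few workload traits taken from the comparison.

```python
# Illustrative heuristic: condenses the block-vs-object comparison above
# into code. Function name and thresholds are invented for demonstration.

def suggest_storage(random_io, change_rate, avg_object_mb):
    """Suggest 'block' (Cinder) or 'object' (Swift) for a workload.

    random_io     -- True for small random reads/writes (databases, VMs)
    change_rate   -- 'high' for frequently rewritten data, 'low' for static
    avg_object_mb -- typical size of each item stored, in MB
    """
    # Block storage suits high-change, small, random I/O (e.g. databases).
    if random_io or change_rate == "high":
        return "block"
    # Object storage suits large, static, sequentially accessed content
    # (e.g. images, backups, archives).
    if change_rate == "low" and avg_object_mb >= 1:
        return "object"
    # Default to block when the profile is ambiguous.
    return "block"

# Example: a database workload vs. a video repository.
print(suggest_storage(random_io=True, change_rate="high", avg_object_mb=0.004))  # block
print(suggest_storage(random_io=False, change_rate="low", avg_object_mb=500.0))  # object
```

Real deployments of course weigh more factors (cost, existing gear, data retention policy), but the shape of the decision is the same.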
11. Cinder Mission Statement
To implement services and libraries to provide on demand, self-service access
to Block Storage resources.
Or…
Virtualize various block storage devices and abstract them into an easy self-service offering, allowing end users to allocate and deploy storage resources on their own, quickly and efficiently.
12. The main points
● The goal, as with the other OpenStack services, is to automate EVERYTHING
● Resources (including storage) should be on-demand and pay as you go
● Allocate only what you need
● Make things as easy as possible, but don’t sacrifice capabilities
13. Quick look at design
● Cinder provides a REST API with calls such as create, attach and delete
● Includes a reference implementation built on LVM
● Can also use various third-party storage arrays/devices
● Cinder provides the interface, coordinating and managing the storage devices
● Each device provides a driver that acts as the bridge
● Mix and match
A graphic representation helps; let’s take a look...
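The create/attach/delete calls mentioned above map onto simple JSON requests against Cinder's REST API. The sketch below builds the request body for a volume-create call; the endpoint, project ID and token are placeholders, and the field names follow the Cinder v3 API (verify against your release's API reference).

```python
# Sketch of a Cinder v3 volume-create request body. The endpoint and
# project ID are placeholders; in a real cloud the token and URL come
# from Keystone.

import json

CINDER_ENDPOINT = "http://controller:8776/v3/PROJECT_ID"  # placeholder

def volume_create_body(size_gb, name, volume_type=None):
    """Build the JSON body for POST /v3/{project_id}/volumes."""
    volume = {"size": size_gb, "name": name}
    if volume_type is not None:
        # volume_type selects a back end / capability set
        # (e.g. a QoS tier exposed by the driver)
        volume["volume_type"] = volume_type
    return {"volume": volume}

body = volume_create_body(10, "db-volume", volume_type="performance")
print(json.dumps(body))
# An actual request would be roughly:
#   requests.post(f"{CINDER_ENDPOINT}/volumes",
#                 headers={"X-Auth-Token": token}, json=body)
```

The scheduler picks a back end that satisfies the requested type; the driver for that back end then does the device-specific work.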
16. SolidFire All-Flash Array
Scale-out high performance storage systems
designed for large scale infrastructure
▪ Most Scalable All-Flash Storage System
▪ 4 – 100 nodes, 35TB – 3.4PB, 7.5M IOPS
▪ Industry-standard hardware, 10 GigE iSCSI, 16/8 Gb FC
▪ 20X performance of traditional SANs
▪ 10X reduction in operational cost
▪ Most complete enterprise feature set of any all-flash array
17. SolidFire & Orchestration
Native multi-tenant architecture, best-in-class integrations
Flexibility
Control
Time to Value
Mixed Workloads
18. More than just “another” OpenStack driver
● It’s about more than just “We have a driver”
○ We’re driving OpenStack and Cinder to make it better
○ We’re better when OpenStack is better
○ Truly changing the way the World uses OpenStack
○ It’s not just about commit counts or participation, it’s what you do with
those investments
● OpenStack is our “passion”
19. Configuring SolidFire Cinder Driver
Edit the cinder.conf file:
volume_driver=cinder.volume.solidfire.SolidFire
san_ip=172.17.1.182
san_login=openstack-admin
san_password=superduperpassword
OpenStack supports multiple back ends
Configured in under a minute
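“Multiple back ends” means one Cinder service can drive several storage systems at once. A minimal sketch of a cinder.conf doing that might look like the fragment below; the section names are arbitrary labels, and driver class paths vary by Cinder release, so check your version’s configuration reference for the exact values.

```ini
# Illustrative cinder.conf fragment: LVM reference driver alongside
# SolidFire. Driver class paths vary by release; verify against the
# configuration reference for your Cinder version.

[DEFAULT]
enabled_backends = lvm, solidfire

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = LVM
volume_group = cinder-volumes

[solidfire]
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
volume_backend_name = SolidFire
san_ip = 172.17.1.182
san_login = openstack-admin
san_password = superduperpassword
```

Each `volume_backend_name` can then be tied to a volume type, so end users simply pick a type and the scheduler routes the volume to the right back end.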