Reference Architecture with Mirantis OpenStack Platform.
1. EMC CONFIDENTIAL—INTERNAL USE ONLY
EMC OPENSTACK CLOUD SOLUTIONS
REFERENCE ARCHITECTURE WITH MIRANTIS OPENSTACK PLATFORM
2.
IT AS A SERVICE DELIVERS BUSINESS AGILITY
IT as a Service
Broker & Builder
New Business Model
New Technology Architecture
New Operation models and roles
Cost Efficiency
CULTURE
Open Source
Agile Apps
Big Data
TECH
BUSINESS
DevOps
Mobile Apps
Customer Data
Speed
3.
OpenStack As An Enabler For
Transformation
Metering
Engine
Service
Catalog
Orchestration
Engine
User Portal
Policy
Engine
Dev-Ops
New
Roles
Agile
Processes
New Apps
Application Fabric
Data Fabric
Lends itself nicely to 3rd Platform Apps
Developer Friendly
Cloud Software Platform: a foundation for SDDC enablement
APIs provide the capability to automate services for cost-effective operations
Need new skill sets and roles
PaaS
SOFTWARE DEFINED DC
TRANSFORMATION
Service APIs
4.
Why OpenStack?
COST SAVINGS
OPERATIONAL
EFFICIENCY
OPEN
PLATFORM
CHOICE OF
TECHNOLOGY
INNOVATE
AND COMPETE
source: OpenStack User Survey, 2014
http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014
5.
What Is OpenStack?
• Flexible and modular architecture. Foundation for a Software Defined DC.
• Delivers IaaS: compute, networking, and storage services, and more.
• Analogous to the Linux kernel (very tunable)
• All services are exposed via APIs (infrastructure as code)
6.
NEW USE CASES
Digital Experience
Real-time Analytics
EXISTING
APPLICATION
INVENTORY &
STRATEGY
Application RightFit
SOFTWARE
DEFINED
DATACENTER
3RD GEN APPS AND
DATA PLATFORM
Re-write / Replace
Leave in
place/Retire
Refactor / Migrate
PLATFORM 2.0
PLATFORM 2.5
PLATFORM 3.0
PLATFORM 1.0
7.
Platform Definition
Platform 2: Components in monolithic applications; relational data; kernel virtualization.
Platform 3: Components re-architected to be loosely coupled, elastic, fault tolerant; No-SQL, in-memory, distributed data; kernel virtualization / containers.
8.
Personas
Administrators responsible for managing and maintaining an IT infrastructure (in a
private cloud) Years of experience with Unix and Linux systems administration.
Manages IT infrastructure, hypervisors and Cloud platform. Interested in how to
deal with failure (planned, unplanned), maintenance of system and utilization.
Enterprise Admin
Cloud Admin
Proficient in administering Unix and Linux systems. Competent shell and Python
programmer. Early adopter of Puppet. Already using AWS for IaaS service
Dev-Ops
Have been using AWS for a while. Primarily developing web applications for
internal usage. API driven. Will integrate with the CI/CD tools and open to
OpenSource.
Clear and efficient catalogue to manage the infra lifecycle
Need a catalogue or CLI for initial deployment rest done via API calls
Management interface for utilization, quotas etc. APIs to integrate into tools
9.
OpenStack Framework
• Currently 14 integrated projects within OpenStack
• All these projects communicate via public API’s
• Quite a few new projects focused on Mgmt and Operations
• Services have behavioral compatibility with AWS
Horizon
Dashboard
Swift
object store
Glance
image store
Nova
compute node
Cinder
volume service
Keystone
identity service
Heat
Orchestration
Ceilometer telemetry service
Trove
database
Neutron
networking
S3, EC2, EBS, VPC, RDS, AMI, IAM, CloudFormation
10.
OpenStack Drivers
EMC Integration: OpenStack
Delivers On
Speed And
Space
Flash
Performance
Low $ Per
Transaction
Any Workload
Hyper
Converged,
S/W Defined
Use Your
Hardware
Broad Portfolio
Fit Your Environment
Evolve With Your Cloud
Reduce
Deployment
Costs
File or Block
Hybrid
Software
Defined
Efficient
Management
Isilon
Data Lake
Scale out File
and Object
System
11.
TECHNICAL EVIDENCE SOLUTION
REFERENCE ARCHITECTURE WITH MIRANTIS – JUNO RELEASE
12.
EMC + Mirantis Technical Evidence
Storage Arrays
Certified &
Validated Designs
Partner Tools
Integration
Cooperative
Support
Joint
Services
EMC
• Solution Focused
• Partnered with Mirantis to provide validated reference designs.
• Integrated with the Mirantis tool set to enable better manageability.
• Joint Service and Support
13.
Mirantis OpenStack
• The most robust
OpenStack
distribution on the
market
• Fuel takes the
guesswork out of
deployment
• Broad choice of
fully-tested
technologies
Simply download, boot, and deploy Mirantis
OpenStack
14.
Distro: Production-Ready Packages
• Fundamental components
– Core OpenStack
– Key Projects
– Plug-ins & Drivers
• Continuous verification
and community contribution
– Solid Reference Architecture
– Continuous Integration and Delivery
– Real-world operation at scale
15.
EMC Reference Architecture with Mirantis
OpenStack
Cinder Drivers
16.
Solution Components
Capability  Component                Supported
Hardware    VNX                      iSCSI, FC
Hardware    XtremIO                  iSCSI, FC
Hardware    ScaleIO                  SDC
Software    Mirantis OpenStack       Juno release
Software    Cinder block drivers     Juno release
Software    CentOS operating system  v6.5 (2.6.32 kernel)
Software    KVM                      Hypervisor in the CentOS kernel
Software    Mirantis Fuel            Version 6.0
18.
Fuel: Deployment and Management
• GUI driven experience for
– Automated deployment of OpenStack
– Guided configuration & management
• Flexible technology choices
• Production-ready HA deployment
• Health validation
– Network verification
– Deployment validation
– Cloud health checks
19.
• Unified Block and File Storage system
• Cinder Supported Protocols
– FC and iSCSI
• Supports all the main volume
operations.
• FAST, FastCache, FC SAN Zoning.
• Integrated into OpenStack trunk
VNX
Unified Hybrid Storage for the Mid-Range
UNIFIED
All mixed
workloads
All access
protocols
HYBRID
Optimized for
FLASH
Benefits of tiered
storage
PRICE
OPTIMIZED
Lowest $/IO
Lowest $/GB
Technology
Leadership
Multicore
Optimized
Designed for
Virtualization
Unified Storage
File and Block
OpenStack Cinder
Cinder Driver
$ cinder type-create "AutoTieringVolume"
$ cinder type-key "AutoTieringVolume" set storagetype:tiering=Auto
fast_support=True fast_cache_enabled=True
20.
Cinder.conf - VNX

FC Driver
enabled_backends = vnxfc
storage_vnx_pool_name = Pool_01_SAS
san_ip = 10.10.72.41
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
destroy_empty_storage_group = False
initiator_auto_registration = True
volume_backend_name = vnx_40

iSCSI Driver
enabled_backends = vnxiscsi
storage_vnx_pool_name = Pool_02_SAS
san_ip = 10.10.72.41
storage_vnx_security_file_dir = /etc/secfile/array1
storage_vnx_authentication_type = global
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
destroy_empty_storage_group = False
volume_backend_name = vnx_41

Location: /etc/cinder/cinder.conf
• Specify the volume drivers in the cinder.conf file
• Restart the cinder-volume service for any configuration change to take effect.
22.
• All Flash array ideal for High Performance
• Scale Out Architecture
– Scale storage resources together with cloud
infra
• Supported Protocols:
– FC and iSCSI
• Provide support for main Volume
Operations
• Integrated into OpenStack trunk
XTREMIO
28.
Certified Volume Operations
Supported on VNX, XtremIO, and ScaleIO:
Create, Delete, Extend volume
Snapshot volume, Delete snapshots
List volumes and snapshots
Attach, Detach volume
Create volume from snapshot
Copy image to volume and volume to image
Clone volume

Supported on VNX and XtremIO:
Create volume with backend

Supported on VNX:
Migrate volume, Retype a volume
Create and Delete Consistency Groups
Create and Delete Consistency Group Snapshots
https://wiki.openstack.org/wiki/CinderSupportMatrix
30.
Cinder – Block Storage Service
• Persistent block level storage devices for use
with OpenStack compute instances.
• Manages the creation, attaching and
detaching of the block devices to servers
• Block storage volumes are fully integrated
into OpenStack Compute and the Dashboard
allowing for cloud users to manage their
own storage needs.
• Snapshots are supported and can be
restored or used to create a new block
storage volume.
31.
Cinder Capabilities
• Volumes:
– Allocated block storage resources that can be attached to instances as secondary
storage or they can be used as the root store to boot instances. Volumes are
persistent R/W block storage devices most commonly attached to the compute node
through iSCSI.
• Snapshots :
– A read-only point in time copy of a volume. The snapshot can be created from a
volume that is currently in use (through the use of --force True) or in an available
state. The snapshot can then be used to create a new volume through create from
snapshot.
• Backups:
– An archived copy of a volume currently stored in OpenStack Object Storage (swift).
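The volume → snapshot → volume lifecycle described above can be sketched as a tiny model. This is an invented illustration, not the actual Cinder code; the class and function names are hypothetical:

```python
# Minimal sketch of the Cinder volume/snapshot lifecycle described above.
# Names and simplifications are illustrative, not the real Cinder objects.

class Volume:
    def __init__(self, name, size_gb, status="available"):
        self.name = name
        self.size_gb = size_gb
        self.status = status  # "available" or "in-use"

def snapshot(volume, force=False):
    """Create a read-only point-in-time copy.

    An in-use volume requires force, mirroring `cinder snapshot-create --force True`.
    """
    if volume.status == "in-use" and not force:
        raise ValueError("in-use volume: pass force=True")
    return {"source": volume.name, "size_gb": volume.size_gb, "read_only": True}

def volume_from_snapshot(snap, name):
    """A new writable volume created from a snapshot inherits its size."""
    return Volume(name, snap["size_gb"])

vol = Volume("data01", 10, status="in-use")
snap = snapshot(vol, force=True)
clone = volume_from_snapshot(snap, "data01-restore")
print(clone.size_gb)  # 10
```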
32.
• Attached to instances as secondary
storage
• Can be used as root store to boot
instances
• Persistent R/W Block storage
• Manage volume lifecycle
– Create, Delete, Extend volumes
– Attach/Detach Volume
• Ability to create different volume types.
Cinder Capabilities : VOLUME
33.
• A read-only point in time copy of a
volume
• Create snapshots, Delete snapshots
• Make volumes out of the created snapshots
Cinder Capabilities : Snapshots
34.
• Backup operations are an admin task and are done via the CLI today
• Backups go to Swift (object storage).
• Find the volume you want to back up.
– Create a backup of the volume
– Verify the backup container
– Restore the volume
CINDER Capabilities - BACKUP
$ cinder backup-create VOLUME_ID
$ swift list
$ cinder backup-restore BACKUP_ID
$ cinder list
35.
Consistency Groups
• Today in Cinder, every operation happens at the volume level.
Consistency Groups (CGs) enable
– Data Protection (snapshots and backups)
– Disaster Recovery (remote replication)
• Consistency Group function
– Allows volumes of the same type to be grouped into a CG so they can be snapshotted/backed up together
– Enables Cinder to leverage the volume replication features available in the storage backends (drivers)
– Provides an orchestration layer above Cinder that understands which volumes should be grouped together
36.
Consistency Groups
• Caveats
– Allow for snapshots of multiple volumes
– Make sure the storage platform supports consistency groups (e.g., VNX)
– Consistency groups can be set only via the CLI; there is no portal support yet
– Certain operations are not permitted while a volume is in a consistency group
• Volume migration, volume retype, volume deletion
• A consistency group has to be deleted as a whole, with all its volumes; the same applies to its volume snapshots.
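Those caveats amount to a few invariants, which the sketch below illustrates; the model is hypothetical and only encodes the rules, not Cinder's implementation:

```python
# Illustrative model of the consistency-group caveats above: volumes in a CG
# share a volume type, cannot be deleted individually, and the group is
# deleted as a whole. Class and method names are invented.

class ConsistencyGroup:
    def __init__(self, name, volume_type):
        self.name = name
        self.volume_type = volume_type
        self.volumes = []

    def add(self, volume_name, volume_type):
        # CGs group volumes of the same type
        if volume_type != self.volume_type:
            raise ValueError("volume type must match the group's type")
        self.volumes.append(volume_name)

    def delete_volume(self, volume_name):
        # individual deletion is one of the operations that is not permitted
        raise RuntimeError("not permitted: delete the consistency group as a whole")

    def delete(self):
        # deleting the group removes all member volumes together
        deleted = list(self.volumes)
        self.volumes.clear()
        return deleted

cg = ConsistencyGroup("cg1", "HighPerf")
cg.add("vol1", "HighPerf")
cg.add("vol2", "HighPerf")
print(cg.delete())  # ['vol1', 'vol2']
```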
37.
• High availability for Cinder
– Deploy a Multi-Node with HA OpenStack
environment.
– Cinder services can be installed on each controller
and provide high availability in case of a controller
reboot or loss.
– If a controller is lost, control plane functions are affected, but the data plane keeps working.
High Availability
Controller-1
Controller-2
Message Q
Database
API Services
Identity
Image
Blk Storage
Dashboard
38.
• Admins have the capability to group tenants
– Using Projects
– Map specific users who can access the
project.
• Quotas can be set for operational limits
– Enforced per tenant (project) level
• Number of volumes
• Number of volume gigabytes allowed
• Number of Block Storage snapshots allowed
Projects and Quotas
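A per-project quota check of the kind described above might look like this sketch; the limits, dictionary layout, and function name are invented for illustration:

```python
# Hypothetical per-project (tenant) quota enforcement for Block Storage.
# Limits are illustrative defaults, not real deployment values.
QUOTAS = {"volumes": 10, "gigabytes": 1000, "snapshots": 10}

def check_quota(usage, request_gb):
    """Raise if creating one more volume of request_gb would exceed any limit."""
    if usage["volumes"] + 1 > QUOTAS["volumes"]:
        raise RuntimeError("volume count quota exceeded")
    if usage["gigabytes"] + request_gb > QUOTAS["gigabytes"]:
        raise RuntimeError("gigabyte quota exceeded")

usage = {"volumes": 9, "gigabytes": 900, "snapshots": 2}
check_quota(usage, 50)      # fits within both limits
# check_quota(usage, 200)   # would raise: gigabyte quota exceeded
```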
39.
• Configuration file: cinder.conf
enabled_backends = XtremIO, VNX
[XtremIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
volume_backend_name = xtremIO_40
[VNX]
storage_vnx_pool_name = Pool_01_SAS
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
volume_backend_name = vnx_41
• Map the backends to volume types
$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set volume_backend_name=xtremIO_40
$ cinder type-create "MedPerf"
$ cinder type-key "MedPerf" set volume_backend_name=vnx_41
MULTI-BACKEND SUPPORT
Cinder-Volume
High Perf Med Perf
Cinder-
driver
Cinder-
driver
40.
• Log files used by Block Storage
– Log file of each Block Storage service is stored in
the /var/log/cinder/ directory of the host
– Most Block Storage errors are caused by
incorrect volume configurations that result in
volume creation failures. To resolve failures,
review logs:
• cinder-api log (/var/log/cinder/api.log)
• cinder-volume log (/var/log/cinder/volume.log)
• Forward the logs to a syslog server
Logging - Cinder
Log pipeline: OpenStack controller + data plane → local log files → rsyslog → Logstash → Elasticsearch → Kibana
http://docs.openstack.org/openstack-ops/content/logging_monitoring.html
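When reviewing those logs, a small parser helps pull out ERROR entries. The line layout assumed here follows the default Juno-era OpenStack log format, and the sample lines are invented:

```python
import re

# Assumed default OpenStack log line layout:
# "<timestamp> <pid> <LEVEL> <module> [<request context>] <message>"
LOG_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<pid>\d+) (?P<level>[A-Z]+) (?P<module>\S+) (?P<rest>.*)$"
)

def errors(lines):
    """Yield (timestamp, module, rest-of-line) for ERROR entries only."""
    for line in lines:
        m = LOG_RE.match(line)
        if m and m.group("level") == "ERROR":
            yield m.group("ts"), m.group("module"), m.group("rest")

# Invented sample lines in the assumed format
sample = [
    "2015-03-10 12:00:01.123 4242 INFO cinder.volume.manager [-] Volume created",
    "2015-03-10 12:00:02.456 4242 ERROR cinder.volume.manager [-] Failed to create volume",
]
print(list(errors(sample))[0][1])  # cinder.volume.manager
```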
41.
• Volume Stats
– Health, Size, Usage.
– Thresholds for alarms
• The data can be used by external
systems for
– Metering/chargeback
– Monitoring.
Monitoring - CEILOMETER
Telemetry pipeline: volume notifications → notification bus → agents/collectors → external systems
http://docs.openstack.org/openstack-ops/content/index.html
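Metering data like the volume stats above could feed a simple chargeback calculation. This sketch integrates volume-size samples over time into gigabyte-hours; the sample data and function are invented for illustration:

```python
# Hypothetical chargeback sketch: integrate volume size samples into GB-hours.
# Each sample is (hour_offset, size_gb); the size holds until the next sample.

def gigabyte_hours(samples, end_hour):
    total = 0.0
    # pair each sample with the next one (or the billing-period end)
    for (t0, size), (t1, _) in zip(samples, samples[1:] + [(end_hour, None)]):
        total += size * (t1 - t0)
    return total

# Volume at 100 GB for 5 hours, then extended to 150 GB for 5 more hours.
samples = [(0, 100), (5, 150)]
print(gigabyte_hours(samples, 10))  # 1250.0
```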
42.
Volume Type
$ cinder type-create "ThickVolume"
$ cinder type-create "ThinVolume"
$ cinder type-create "DeduplicatedVolume"
$ cinder type-create "CompressedVolume"
$ cinder type-key "ThickVolume" set storagetype:provisioning=thick
$ cinder type-key "ThinVolume" set storagetype:provisioning=thin
$ cinder type-key "DeduplicatedVolume" set storagetype:provisioning=deduplicated
deduplication_support=True
$ cinder type-key "CompressedVolume" set storagetype:provisioning=compressed
compression_support=True
$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set storagetype:pool=Pool_02_SASFLASH
volume_backend_name=vnx_41
• If a user wants to create a volume on a certain storage pool, a volume type with an extra spec specifying the storage pool should be created first; the user can then use this volume type to create the volume.
46.
Cinder Architecture Building Blocks
• Cinder API
– A WSGI app that authenticates and routes requests throughout the Block Storage
service. It supports the OpenStack APIs
• Cinder Scheduler
– Schedules and routes requests to the appropriate volume service. Depending upon
the configuration, this can be simple round-robin scheduling, or it can be more
sophisticated through the use of the Filter Scheduler. The Filter Scheduler is the
default and enables filters on things like Capacity, Availability Zone, Volume Types,
and custom filters
• Cinder Volume
– Manages Block Storage devices, specifically the back-end devices themselves
• Cinder Backup
– Provides a means to back up a Block Storage volume to OpenStack Object Storage.
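The Filter Scheduler described above can be sketched as a chain of predicates over backend capability reports; the backend data and the "most free space wins" weighing rule below are invented for illustration:

```python
# Toy version of the Filter Scheduler: keep backends that satisfy each filter,
# then pick one. The capability dicts are hypothetical.

BACKENDS = [
    {"name": "vnx_41", "free_gb": 500, "zone": "az1"},
    {"name": "xtremIO_40", "free_gb": 50, "zone": "az1"},
]

def capacity_filter(backend, request):
    return backend["free_gb"] >= request["size_gb"]

def az_filter(backend, request):
    # if the request names no zone, any zone is acceptable
    return backend["zone"] == request.get("zone", backend["zone"])

def schedule(request, filters=(capacity_filter, az_filter)):
    candidates = [b for b in BACKENDS if all(f(b, request) for f in filters)]
    if not candidates:
        raise RuntimeError("no valid backend")
    # the real scheduler also weighs candidates; here: most free space wins
    return max(candidates, key=lambda b: b["free_gb"])["name"]

print(schedule({"size_gb": 100, "zone": "az1"}))  # vnx_41
```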
48.
Authentication - Keystone
• Provide credentials to
authenticate to the system.
• Admin
• User
• Credentials used by all services
to talk to each other
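Since every service authenticates through Keystone with the same credentials flow, a token request is just an API payload. This sketch builds a Keystone v2.0 token request body without sending it; the endpoint, user, and tenant names are placeholders:

```python
import json

def build_token_request(username, password, tenant):
    """Body for POST /v2.0/tokens against the Keystone v2.0 identity API."""
    return {
        "auth": {
            "tenantName": tenant,
            "passwordCredentials": {"username": username, "password": password},
        }
    }

# In a real deployment this body would be POSTed to the Keystone endpoint,
# e.g. http://KEYSTONE_HOST:5000/v2.0/tokens (placeholder host).
body = build_token_request("admin", "secret", "demo")
print(json.dumps(body, sort_keys=True))
```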
50.
Volume Creation - Cinder
Group volumes based on performance
Size
Data Volume
Boot Volume
Defaults to Nova-AZ if not
specified
52.
Managing the volumes
Increase the volume size
Delete the volumes
Create snapshots of volumes
53.
Launching an Instance - Nova
Flavor
Count
Image
• Initiate creation of an
instance.
• Based on flavor
• Based on number
• Based on AZ
IT is being disrupted by changes in technology, business, and culture; to address this, IT has to move from traditional delivery models to a broker/provider model. For this to occur, organizations need to drive toward a hybrid IT model, which requires adopting a cloud framework.
Adoption of a hybrid IT model for capability and capacity.
Manage risk across the cloud supply chain.
Visibility, Governance and control across Clouds.
OpenStack is an enabler for IT organizations to build a private cloud geared towards a software-defined model and, at the same time, effectively provides the capability to move towards a DevOps model.
EMC solutions and services will deliver a service that enables cloud management and the right applications and application architectures to run on the cloud, and provide DevOps advisory services.
Lends itself to applications that use APIs to ask for "sets of resources." OpenStack provides open, standard APIs that can be consumed, and the OpenStack controller has the capability to schedule resources in an optimal way and manage them.
APIs lend themselves to DevOps: operators can automate infrastructure pre- and post-deployment using Puppet/Chef, and OpenStack has good integration with configuration management and CI/CD tools.
Reasons customers give for why they want OpenStack (tie each back to agility where natural in the talk track):
Operational Efficiency – OpenStack designed for self-service; end-users get to resources more quickly
Open Platform – Open source; avoids vendor lock-in and leverages an entire community
Choice – large ecosystem of vendors to choose from to deploy OpenStack – hardware/software agnostic Choose your own storage/network etc.
Innovate and Compete – more agile; designed for Platform 3 – use-case changes come from the community and can head towards the new; compete on next-generation applications
Talk about how the OpenStack architecture is modular and flexible – you can pick the services you need
Delivers IaaS service
Platform 1: Mainframe era with mainframe application workloads
Platform 2: Client-Server and virtualized x86 traditional app workloads
Platform 3: Application built on NGN architecture for cloud, social, mobile and big data
Platform 2:
Monolithic Applications
Workloads Scale Up
Applications Expect Resilient Infrastructure
Infrastructure Provides Resiliency
High Degree of Virtualization
IT Operational Processes Largely Unchanged
Platform 2.5:
Monolithic Applications
Workloads Scale Up
Applications Assume Resilient Infrastructure
IT Process Automation – Accelerate/Automate IT Processes
More Agile IT
More Agile DevOps Focus
Platform 3
Apps Loosely Coupled, Small Components
Stateless Execution Modules
Application Takes Responsibility for Resiliency, Fault Tolerance
Assume High Data Resiliency
Workloads Easily Scale Out
DevOps Focus
Cluster Mgmt (IT Automation)
- Config/package mgmt --- what/how do things get installed
Deployment
Naming
Monitoring
How to deal with failures: planned and unplanned? Utilization
Maintenance
Capistrano for deployment…
Enterprise Infra Admin
Needs: A great management interface for hardware resource utilization, quotas, and good back-ups and a recovery plan. A user friendly administrative GUI interface, and a logically set out and explained command line instruction set. Well documented APIs to integrate into other tools.
Cloud Admin:
Needs: A great management interface for hardware resource utilization, quotas, and good back-ups and a recovery plan. A user friendly administrative GUI interface, and a logically set out and explained command line instruction set. Well documented APIs to integrate into other tools.
Dev-Ops/Developer
Needs: A user friendly GUI interface, and a logically set-out and explained command line instruction set. Integration with her preferred CI tool.
Trove and Sahara
Framework…
Isilon - ADD
XtremIO
ScaleIO
ViPR
Large service
Fuel
Automated Install
Full stack support
Robust OpenStack distribution; Best in Class
Recognized OpenStack training
Best in class storage
Wide storage portfolio
Cinder Project Leadership
Already a big contributor
Leverage Community and listen to our customers to Continually improve and innovate EMC Cinder Storage Drivers
Become larger contributor
Fuel is an open source deployment and control plane for OpenStack. Developed as an OpenStack community effort, it provides an intuitive, GUI-driven experience for automated deployment and management of OpenStack, related community projects and plugins.
Fully automated storage tiering support
VNX supports Fully automated storage tiering which requires the FAST license activated on the VNX. The OpenStack administrator can use the extra spec key storagetype:tiering to set the tiering policy of a volume and use the extra spec key fast_support=True to let Block Storage scheduler find a volume back end which manages a VNX with FAST license activated. Here are the five supported values for the extra spec key storagetype:tiering:
StartHighThenAuto (Default option)
Auto
HighestAvailable
LowestAvailable
NoMovement
The tiering policy cannot be set for a deduplicated volume. The user can check the storage pool properties on the VNX to know the tiering policy of a deduplicated volume.
Here is an example of how to create a volume with a tiering policy:
$ cinder type-create "AutoTieringVolume"
$ cinder type-key "AutoTieringVolume" set storagetype:tiering=Auto fast_support=True
$ cinder type-create "ThinVolumeOnLowestAvailableTier"
$ cinder type-key "ThinVolumeOnLowestAvailableTier" set storagetype:provisioning=thin storagetype:tiering=LowestAvailable fast_support=True
FAST Cache support
VNX has the FAST Cache feature, which requires the FAST Cache license activated on the VNX. The OpenStack administrator can use the extra spec key fast_cache_enabled to choose whether to create a volume on a volume back end which manages a pool with FAST Cache enabled. The value of the extra spec key fast_cache_enabled is either True or False. When creating a volume, if the key fast_cache_enabled is set to True in the volume type, the volume will be created by a back end which manages a pool with FAST Cache enabled.
Storage group automatic deletion
For volume attaching, the driver maintains a storage group on the VNX for each compute node hosting the VM instances that consume VNX Block Storage (using the compute node's hostname as the storage group's name). All the volumes attached to the VM instances on a compute node will be put into the corresponding storage group. If destroy_empty_storage_group=True, the driver will remove the empty storage group when its last volume is detached. For data safety, it is not suggested to set destroy_empty_storage_group=True unless the VNX is exclusively managed by one Block Storage node, because a consistent lock_path is required for operation synchronization for this behavior.
EMC storage-assisted volume migration
The EMC VNX direct driver supports storage-assisted volume migration. When the user starts a migration with cinder migrate --force-host-copy False VOLUME_ID HOST or cinder migrate VOLUME_ID HOST, Cinder will try to leverage the VNX's native volume migration functionality.
In the following scenarios, VNX native volume migration will not be triggered:
Volume migration between back ends with different storage protocols, e.g., FC and iSCSI.
Volume is being migrated across arrays.
Initiator auto registration
If initiator_auto_registration=True, the driver will automatically register iSCSI initiators with all working iSCSI target ports on the VNX array during volume attaching (The driver will skip those initiators that have already been registered).
If the user wants to register the initiators with some specific ports on VNX but not register with the other ports, this functionality should be disabled.
Initiator auto deregistration
Enabling storage group automatic deletion is the precondition of this functionality. If initiator_auto_deregistration=True is set, the driver will deregister all the iSCSI initiators of the host after its storage group is deleted.
Read-only volumes
OpenStack supports read-only volumes. The following command can be used to set a volume to read-only:
$ cinder readonly-mode-update VOLUME_ID True
After a volume is marked as read-only, the driver will forward that information when a hypervisor attaches the volume, and the hypervisor will have an implementation-specific way to make sure the volume is not written.
Multiple pools support
Normally a storage pool is configured for a Block Storage back end (a pool-based back end), so that only that storage pool will be used by that back end.
If storage_vnx_pool_name is not given in the configuration file, the driver will allow the user to use the extra spec key storagetype:pool in the volume type to specify the storage pool for volume creation. If storagetype:pool is not specified in the volume type and storage_vnx_pool_name is not found in the configuration file, the driver will randomly choose a pool to create the volume. This kind of Block Storage back end is called an array-based back end.
Here is an example configuration of an array-based back end:
san_ip = 10.10.72.41
#Directory path that contains the VNX security file. Make sure the security file is generated first
storage_vnx_security_file_dir = /etc/secfile/array1
storage_vnx_authentication_type = global
naviseccli_path = /opt/Navisphere/bin/naviseccli
default_timeout = 10
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
destroy_empty_storage_group = False
volume_backend_name = vnx_41
In this configuration, if the user wants to create a volume on a certain storage pool, a volume type with an extra spec specifying the storage pool should be created first; the user can then use this volume type to create the volume.
Here is an example of creating the volume type:
$ cinder type-create "HighPerf"
$ cinder type-key "HighPerf" set storagetype:pool=Pool_02_SASFLASH volume_backend_name=vnx_41
Multiple pools support is still an experimental workaround introduced before the pool-aware-cinder-scheduler blueprint. It is NOT recommended to enable this feature, since Juno now supports pool-aware-cinder-scheduler. In a later driver update, the driver-side change that cooperates with pool-aware-cinder-scheduler will be introduced.
Volume number threshold
In VNX, there is a limit on the maximum number of pool volumes that can be created in the system. When the limit is reached, no more pool volumes can be created even if there is enough remaining capacity in the storage pool. In other words, if the scheduler dispatches a volume creation request to a back end that has free capacity but reaches the limit, the back end will fail to create the corresponding volume.
The default value of the option check_max_pool_luns_threshold is False. When check_max_pool_luns_threshold=True, the pool-based back end will check the limit and will report 0 free capacity to the scheduler if the limit is reached. So the scheduler will be able to skip this kind of pool-based back end that runs out of the pool volume number.
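The check_max_pool_luns_threshold behavior above can be illustrated with a toy capacity report; the limit and numbers are invented, not real VNX values:

```python
# Sketch of the threshold behavior described above: when the pool's LUN count
# reaches the array limit, report zero free capacity so the scheduler skips
# this back end even though space remains.

MAX_POOL_LUNS = 2048  # illustrative array-wide limit, not a real VNX figure

def report_free_capacity(free_gb, lun_count, check_threshold=True):
    if check_threshold and lun_count >= MAX_POOL_LUNS:
        return 0  # scheduler will treat the back end as full
    return free_gb

print(report_free_capacity(500, 2048))         # 0   (limit reached)
print(report_free_capacity(500, 100))          # 500
print(report_free_capacity(500, 2048, False))  # 500 (check disabled, default)
```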
FC SAN auto zoning
EMC direct driver supports FC SAN auto zoning when ZoneManager is configured. Set zoning_mode to fabric in back-end configuration section to enable this feature. For ZoneManager configuration, please refer to the section called “Fibre Channel Zone Manager”.
Multi-backend configuration
[DEFAULT]
enabled_backends = backendA, backendB
[backendA]
storage_vnx_pool_name = Pool_01_SAS
san_ip = 10.10.72.41
#Directory path that contains the VNX security file. Make sure the security file is generated first.
storage_vnx_security_file_dir = /etc/secfile/array1
naviseccli_path = /opt/Navisphere/bin/naviseccli
#Timeout in Minutes
default_timeout = 10
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
destroy_empty_storage_group = False
initiator_auto_registration = True
[backendB]
storage_vnx_pool_name = Pool_02_SAS
san_ip = 10.10.26.101
san_login = username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
#Timeout in Minutes
default_timeout = 10
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
destroy_empty_storage_group = False
initiator_auto_registration = True
[database]
max_pool_size = 20
max_overflow = 30
For more details on multi-backend, see OpenStack Cloud Administration Guide.
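A multi-backend cinder.conf like the one above can be sanity-checked with a few lines of Python. This sketch only verifies that every name in enabled_backends has a matching section; the sample config text is a trimmed, hypothetical fragment:

```python
import configparser

# Trimmed, hypothetical multi-backend fragment in the shape shown above.
SAMPLE = """\
[DEFAULT]
enabled_backends = backendA, backendB
[backendA]
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
volume_backend_name = vnx_40
[backendB]
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
volume_backend_name = vnx_41
"""

def missing_backend_sections(text):
    """Return enabled_backends entries that have no matching [section]."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    names = [n.strip() for n in cfg["DEFAULT"]["enabled_backends"].split(",")]
    return [n for n in names if n not in cfg]

print(missing_backend_sections(SAMPLE))  # []
```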
Force delete volumes in storage groups
Some available volumes may remain in storage groups on the VNX array due to some OpenStack timeout issues. But the VNX array does not allow the user to delete the volumes which are still in storage groups. The option force_delete_lun_in_storagegroup is introduced to allow the user to delete the available volumes in this tricky situation.
When force_delete_lun_in_storagegroup=True is set in the back-end section, the driver will move the volumes out of storage groups and then delete them if the user tries to delete the volumes that remain in storage groups on the VNX array.
The default value of force_delete_lun_in_storagegroup is False.
# A list of backend names to use. These backend names should
# be backed by a unique [CONFIG] group with its options (list
# value)
enabled_backends=vnxiscsi,vnxfc
[database]
max_pool_size = 20
max_overflow = 30
[vnxiscsi]
storage_vnx_pool_name=OpenStack_iSCSI
san_ip=192.168.1.30
san_secondary_ip=192.168.1.31
san_login=sysadmin
san_password=sysadmin
storage_vnx_authentication_type=global
volume_driver=cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
naviseccli_path=/opt/Navisphere/bin/naviseccli
#Timeout in Minutes
default_timeout = 10
destroy_empty_storage_group = False
initiator_auto_registration = True
volume_backend_name=vnx_iscsi
[vnxfc]
storage_vnx_pool_name=OpenStack_FC
san_ip=192.168.1.30
san_secondary_ip=192.168.1.31
san_login=sysadmin
san_password=sysadmin
storage_vnx_authentication_type=global
volume_driver=cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
naviseccli_path=/opt/Navisphere/bin/naviseccli
default_timeout=30
destroy_empty_storage_group=False
volume_backend_name=vnx_fc
All-flash array, ideal for high performance
Scale-out architecture: scale storage resources together with the cloud infrastructure
Supported protocols:
FC and iSCSI
Provides support for the main volume operations
Integrated into the OpenStack trunk
Setting thin provisioning and multipathing parameters
To support thin provisioning and multipathing in the XtremIO Array, the following parameters from the Nova and Cinder configuration files should be modified as follows:
Thin Provisioning
All XtremIO volumes are thin provisioned. The default value of 20 should be maintained for the max_over_subscription_ratio parameter.
The use_cow_images parameter in the nova.conf file should be set to False as follows:
use_cow_images = false
Multipathing
The use_multipath_for_image_xfer parameter in the cinder.conf file should be set to True as follows:
use_multipath_for_image_xfer = true
[DEFAULT]
enabled_backends = XtremIO
[XtremIO]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
san_ip = XMS_IP
xtremio_cluster_name = Cluster01
san_login = XMS_USER
san_password = XMS_PASSWD
volume_backend_name = XtremIOAFA
[DEFAULT]
# A list of backend names to use. These backend names should
# be backed by a unique [CONFIG] group with its options (list
# value)
enabled_backends = xioiscsi,xiofc
[xioiscsi]
san_ip=192.168.50.50
san_login=openstack
san_password=Password123!
volume_driver=cinder.volume.drivers.emc.xtremio.XtremIOISCSIDriver
volume_backend_name=xio_iscsi
[xiofc]
san_ip=192.168.50.50
san_login=openstack
san_password=Password123!
volume_driver=cinder.volume.drivers.emc.xtremio.XtremIOFibreChannelDriver
volume_backend_name=xio_fc
Software-only solution
Scales elastically
Data protection: replicates data twice; supports erasure coding
Cinder driver interfaces between ScaleIO and OpenStack
Presents volumes to OpenStack as block devices available for storage
ScaleIO driver executes volume operations by communicating with the backend ScaleIO components through the ScaleIO REST Gateway
Presentation Layer:
ScaleIO Data Client (SDC)
Block Device Driver
Exposes volumes to applications
Service must run to provide access to volumes
Over TCP/IP
Data Server:
ScaleIO Data Server (SDS)
Abstracts storage media
Contributes to storage pools
Performs I/O operations
ScaleIO Metadata Manager
Not in the data path
Monitoring and Configuration
Holds cluster wide component mapping
The OpenStack Block Storage service provides persistent block storage resources that OpenStack Compute instances can consume. This includes secondary attached storage similar to the Amazon Elastic Block Storage (EBS) offering. In addition, you can write images to a Block Storage device for Compute to use as a bootable persistent instance.
The Block Storage service differs slightly from the Amazon EBS offering. The Block Storage service does not provide a shared storage solution like NFS. With the Block Storage service, you can attach a device to only one instance.
The Block Storage service provides:
The cinder command-line interface provides the tools for creating a volume backup. You can restore a volume from a backup as long as the backup's associated database information (or backup metadata) is intact in the Block Storage database.
Run this command to create a backup of a volume:
$ cinder backup-create VOLUME
Where VOLUME is the name or ID of the volume.
This command also returns a backup ID.
Use this backup ID when restoring the volume:
$ cinder backup-restore BACKUP_ID
Alternatively, you can export and save the metadata of selected volume backups. Doing so precludes the need to back up the entire Block Storage database. This is useful if you need only a small subset of volumes to survive a catastrophic database failure.
Because volume backups are dependent on the Block Storage database, you must also back up your Block Storage database regularly to ensure data recovery.
Export and import backup metadata
A volume backup can only be restored on the same Block Storage service. This is because restoring a volume from a backup requires metadata available on the database used by the Block Storage service.
Note: For information about how to back up and restore a volume, see the section called “Back up and restore volumes”.
You can, however, export the metadata of a volume backup. To do so, run this command as an OpenStack admin user (presumably, after creating a volume backup):
$ cinder backup-export BACKUP_ID
Where BACKUP_ID is the volume backup's ID. This command should return the backup's corresponding database information as encoded string metadata.
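Metadata exported this way can later be imported into a Block Storage database with the corresponding import command (a sketch; BACKUP_SERVICE and BACKUP_URL are placeholders for the two values returned by backup-export):

```console
$ cinder backup-import BACKUP_SERVICE BACKUP_URL
```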
https://etherpad.openstack.org/p/juno-cinder-cinder-consistency-groups
https://etherpad.openstack.org/p/icehouse-cinder-continuous-volume-replication-v2
1. Create a number of volumes in Cinder. The CG should be created first, and the volumes then associated with it at volume-create time.
Can we not just "create within CG"? Yes, that could be done as well; that is what is being proposed.
2. Create a CG, specifying the volumes to be added to the CG and the volume type.
You can only have one volume type within a CG.
3. Create a snapshot of the CG.
- Cinder API creates cgsnapshot and individual snapshot entries in the db and sends request to Cinder volume node.
- Cinder manager calls novaclient which calls a new Nova admin API "quiesce" that uses QEMU guest agent to freeze the guest filesystem.
Can leverage this work: https://wiki.openstack.org/wiki/Cinder/QuiescedSnapshotWithQemuGuestAgent
- Cinder manager calls Cinder driver.
- Cinder driver communicates with backend array to create a point-in-time consistency snapshot of the CG.
- Cinder manager calls novaclient which calls a new Nova admin API "unquiesce" that uses QEMU guest agent to thaw the guest filesystem.
Need to think about a tool (nova-manage, cinder-manage, or similar) to fix things up if Cinder goes down between quiesce and unquiesce.
Nova will likely be polling for updates and will eventually time out and unfreeze the instance?
4. Create a backup of the CG.
- Cinder backup API creates cgbackup and individual backup entries in the db and sends request to Cinder volume node.
- Cinder backup manager calls novaclient which calls a new Nova admin API "quiesce" that uses QEMU guest agent to freeze the guest filesystem.
Can leverage this work: https://wiki.openstack.org/wiki/Cinder/QuiescedSnapshotWithQemuGuestAgent
- Cinder backup manager calls Cinder driver which calls the backup driver.
- Cinder backup driver communicates with backup backend (swift, ceph, or other vendor specific backends) to create a point-in-time consistency backup of the CG.
- Cinder backup manager calls novaclient which calls a new Nova admin API "unquiesce" that uses QEMU guest agent to thaw the guest filesystem.
If a CG is to be modified by adding or removing volumes, we’ll check whether it already has cgsnapshots and cgbackups. If it does, then the CG cannot be modified.
Currently, a volume can be backed up only when it is available. For creating backups of a CG, we need to support backups while the volume is attached. This needs driver work, so it might come later.
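The quiesce/snapshot/unquiesce ordering discussed in steps 3 and 4 can be sketched as follows (an illustrative sketch, not Cinder code; all names are assumptions). The finally block mirrors the cleanup concern above: the guest filesystem must be thawed even if the snapshot step fails partway through.

```python
# Sketch of the CG snapshot ordering: freeze the guest, snapshot every
# volume in the consistency group, then always thaw the guest.
def snapshot_consistency_group(guest, volumes):
    snapshots = []
    guest.quiesce()              # freeze the guest filesystem via the agent
    try:
        for vol in volumes:      # backend takes point-in-time snapshots
            snapshots.append(f"snap-{vol}")
    finally:
        guest.unquiesce()        # always thaw, even on failure
    return snapshots

class FakeGuest:
    """Stand-in for the QEMU guest agent (illustrative only)."""
    def __init__(self):
        self.frozen = False
    def quiesce(self):
        self.frozen = True
    def unquiesce(self):
        self.frozen = False

g = FakeGuest()
snaps = snapshot_consistency_group(g, ["vol-a", "vol-b"])
print(snaps, g.frozen)  # ['snap-vol-a', 'snap-vol-b'] False
```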
Quotas are operational limits. For example, the number of gigabytes allowed per tenant can be controlled to ensure that a single tenant cannot consume all of the disk space. Quotas are currently enforced at the tenant (or project) level, rather than the user level.
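For example, a tenant's Block Storage quotas can be inspected and adjusted with the cinder client (a sketch; TENANT_ID is a placeholder):

```console
$ cinder quota-show TENANT_ID
$ cinder quota-update --gigabytes 500 TENANT_ID
```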
use_syslog=True
syslog_log_facility=LOG_LOCAL2
Elasticsearch - indexes data and enables search and storage
Logstash
Elasticsearch
Kibana
Splunk
Dashboard ("Horizon") provides a web front end to the other OpenStack services. Compute ("Nova") stores and retrieves virtual disks ("images") and associated metadata in Image ("Glance"). Network ("Quantum") provides virtual networking for Compute. Block Storage ("Cinder") provides storage volumes for Compute. Image ("Glance") can store the actual virtual disk files in the Object Store ("Swift"). All the services authenticate with Identity ("Keystone").
Cinder API
A WSGI app that authenticates and routes requests throughout the Block Storage service. It supports the OpenStack APIs.
Cinder Scheduler
Schedules and routes requests to the appropriate volume service. Depending upon the configuration, this could be simple round-robin scheduling, or it can be more sophisticated through the use of the Filter Scheduler. The Filter Scheduler is the default and enables filters on things like capacity, availability zone, volume types, and custom filters.
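The filter idea can be sketched like this (a toy sketch, not the real Cinder Filter Scheduler; the backends, filter names, and request fields are all illustrative assumptions): each filter narrows the list of candidate backends, and the scheduler picks from the survivors.

```python
# Toy filter scheduler: keep only backends that pass every filter,
# then pick the first survivor for the new volume.
def capacity_filter(backend, request):
    return backend["free_gb"] >= request["size_gb"]

def az_filter(backend, request):
    # If the request names no availability zone, any backend passes.
    return backend["az"] == request.get("az", backend["az"])

def schedule(backends, request, filters=(capacity_filter, az_filter)):
    survivors = [b for b in backends if all(f(b, request) for f in filters)]
    return survivors[0]["name"] if survivors else None

backends = [
    {"name": "vnx_fc", "free_gb": 50, "az": "nova"},
    {"name": "xio_fc", "free_gb": 500, "az": "nova"},
]
print(schedule(backends, {"size_gb": 100}))  # xio_fc
```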
Cinder Volume
Manages Block Storage devices, specifically the back-end devices themselves
Cinder Backup
Provides a means to back up a Block Storage volume to OpenStack Object Storage.
Think of it as a toolkit to build private clouds.
1. The Cinder client, in this case Horizon, makes a request to create a volume on the block storage.
2. The Cinder REST API processes and validates the request, making sure that the correct credentials are provided. It then places the message onto the Cinder message bus.
3. The Cinder volume process picks the request up from the message bus and sends it to the Cinder scheduler to determine which block-based storage to provision to, based on the capabilities asked for in the request.
4. The Cinder scheduler takes the message off the queue and generates a list of possible storage candidates, based on the capabilities required by the request, such as volume type, sizing, etc.
5. The Cinder volume process reads the response from the scheduler, looks through the list, and invokes the correct storage driver; in this case, the EMC VNX Cinder driver.
6. The VNX Cinder driver creates the requested storage volume, interacting with the storage subsystems. For the VNX this is a direct CLI call.
7. The Cinder volume driver gets the response back with connection information and puts it onto the message queue.
8. The Cinder API process reads the response from the queue and responds to the client.
9. Finally, the Cinder client (in this case Horizon) gets the response informing it of the status of the creation request, i.e. the volume UUID.
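The nine steps above can be sketched, very loosely, with an in-process queue standing in for the message bus (an illustrative sketch, not EMC or OpenStack code; every name here is an assumption):

```python
# Minimal simulation of the volume-create flow: API puts a request on the
# bus, the volume process schedules a backend and "provisions" the volume.
import queue
import uuid

bus = queue.Queue()  # stands in for the AMQP message bus

def api_create_volume(name, size_gb):
    """Step 2: validate the request and place it on the bus."""
    bus.put({"op": "create", "name": name, "size_gb": size_gb})

def scheduler_pick_backend(request, backends):
    """Step 4: pick a backend with enough free capacity."""
    candidates = [b for b in backends if b["free_gb"] >= request["size_gb"]]
    return candidates[0] if candidates else None

def volume_process(backends):
    """Steps 3 and 5-8: take the request off the bus, schedule, provision."""
    request = bus.get()
    backend = scheduler_pick_backend(request, backends)
    if backend is None:
        return {"status": "error", "reason": "no capacity"}
    backend["free_gb"] -= request["size_gb"]       # step 6: driver provisions
    return {"status": "available", "id": str(uuid.uuid4())}  # steps 7-9

backends = [{"name": "vnx_fc", "free_gb": 100}]
api_create_volume("vol01", 10)
result = volume_process(backends)
print(result["status"])  # available
```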