OpenEBS hangout #4

  1. OpenEBS Hangout #4 - 22nd December 2017
  2. Agenda
     ● Introducing MayaOnline (5-10 minutes)
     ● OpenEBS recap (5 minutes)
     ● Release updates (5-10 minutes)
       ○ OpenEBS
       ○ Kubernetes contributions
       ○ What is coming in OpenEBS 0.6?
     ● cMotion overview & demo (20 minutes)
  3. MayaOnline Introduction
  4. Maya: Cross-cloud control plane
     ○ Visibility, automation, collaboration, and, over time, learning via machine learning
     ○ OpenEBS users can subscribe to a free version and then upgrade to a subscription that includes OpenEBS enterprise support
     OpenEBS: Containerized storage for containers
     ○ Open-source software that allows each workload - and each DevOps team - to have its own storage controller
  5. MayaOnline.io API (diagram): ✓ Visibility ✓ ChatOps ✓ Optimization - accessed via ChatOps and the Maya GUI
  6. MayaOnline.io API (diagram, repeated): ✓ Visibility ✓ ChatOps ✓ Optimization - accessed via ChatOps and the Maya GUI
  7. OpenEBS Quick Recap
  8. OpenEBS recap - why, what, how?
     ● Containerized storage for containers
     ● Storage solution for stateful applications running on Kubernetes
     ● One storage controller per application/team vs. a monolithic storage controller
     ● Integrates nicely into Kubernetes (provisioning)
  9. Architecture: Kubernetes (diagram: a K8s master and several minions, each minion running pods of containers with a Kubelet agent)
     ● Minions run on physical nodes
     ● Pods group containers and share an IP address; each node runs a Kubelet agent
     ● K8s master services include etcd, the APIs, the scheduler, the controller manager & others
  10. Converged: Kubernetes + OpenEBS (diagram: the same cluster, with OpenEBS data containers running in pods alongside the application pods on each minion)
      ● Data containers run in pods on the physical machines - an entire enterprise-class storage controller
      ● Data containers mean every workload - and every per-app team - has its own controller
      ● OpenEBS runs on the master and delivers services such as APIs, the storage scheduler, analytics & others
  11. How to get started?
      On your kubemaster:
      kubemaster~: kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
      In the application YAML, choose the OpenEBS storage class, set up the policies, and launch the application:
      kubemaster~: kubectl apply -f percona.yaml
      kubemaster~: kubectl get pods | grep pvc
      pvc-8a9fc4b1-d838-11e7-9caa-42010a8000a7-ctrl-696530238-ngrgj 2/2 Running 0 36s
      pvc-8a9fc4b1-d838-11e7-9caa-42010a8000a7-rep-3408218758-2ldzv 1/1 Running 0 36s
      pvc-8a9fc4b1-d838-11e7-9caa-42010a8000a7-rep-3408218758-6mwj5 1/1 Running 0 36s
      OpenEBS provisions a storage controller with the requested number of replicas; the volume is bound and ready.
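      As an illustration of the application-side YAML mentioned above, here is a minimal sketch of a PVC that selects an OpenEBS storage class; the claim name and the class name "openebs-standard" are assumptions for illustration, not taken from the deck:

        # Hypothetical PVC selecting an OpenEBS storage class (names are illustrative)
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: demo-percona-claim            # assumed claim name
        spec:
          storageClassName: openebs-standard  # assumed OpenEBS class name
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 5G

      The application Deployment would then reference this claim as a volume in the usual Kubernetes way, and OpenEBS would create the ctrl and rep pods shown above.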
  12. Stateful apps using OpenEBS volumes (diagram of a Kubernetes cluster)
      ● Application: Deployment, Service, PVC, PV backed by an OpenEBS volume (stateless ingress service in front of a stateful DB, etc.)
      ● OpenEBS volume: Deployment, Service, PV backed by disk - a cStor target (ov-vol1) plus cStor replicas (ov-vol1-r1, ov-vol1-r2) spread across node1 and node2
      ● Storage backend: disks
  13. OpenEBS release updates
  14. Release updates - OpenEBS 0.5:
      ● Prometheus
      ● Grafana
      ● Volume-exporter sidecar per storage controller
      ● New storage policies
        ○ number of replicas
        ○ monitoring on/off
        ○ storage pool (AWS EBS, GPD, local LVM, etc.)
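      To illustrate how such policies could be surfaced to users, below is a StorageClass-style sketch; the provisioner string and the parameter keys are hypothetical placeholders, not confirmed OpenEBS 0.5 syntax:

        # Illustrative only - the parameter keys below are hypothetical placeholders,
        # not verified OpenEBS 0.5 policy names
        apiVersion: storage.k8s.io/v1
        kind: StorageClass
        metadata:
          name: openebs-percona                      # assumed class name
        provisioner: openebs.io/provisioner-iscsi    # assumed provisioner name
        parameters:
          openebs.io/replica-count: "3"    # number-of-replicas policy
          openebs.io/monitoring: "true"    # monitoring on/off
          openebs.io/storage-pool: "gpd"   # backing pool: AWS EBS, GPD, local LVM, ...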
  15. K8s contributions from OpenEBS: In progress
  16. What is coming in OpenEBS 0.6?
      ● OpenEBS provisioner will support more of the Kubernetes storage spec
        ○ PV resource policies (quota, number of PVCs, etc.)
        ○ Volume resize
        ○ Volume snapshots
        ○ Block volume claims
      ● Disk monitoring and alerts
      ● Refactor storage policy specification as CRDs
      ● Support OpenEBS upgrades via kubectl
      ● Enhance debuggability
      ● Enhance CI with platform testing on OpenShift/CentOS, CoreOS, Rancher
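      For context, this is how a volume resize is typically requested in Kubernetes once a provisioner supports it - a generic sketch reusing the hypothetical names from the earlier example, not confirmed OpenEBS 0.6 behaviour:

        # Generic Kubernetes resize sketch: increase the PVC's requested size and let
        # the provisioner expand the volume (assumes the storage class allows expansion)
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: demo-percona-claim            # hypothetical claim from the earlier sketch
        spec:
          storageClassName: openebs-standard  # assumed class name
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10G                    # increased from the original 5G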
  17. cStor & cMotion (Tech preview)
  18. What cStor is not
      (cStor can store up to 2^128 bits and can achieve millions of IOPS with microsecond latency)
      ● Not a distributed file system, which is typically needed for capacity and performance scaling; you can't have one without the other
        ○ hard to manage in production (you do not want to need a storage team)
      ● Volumes are typically small - GBs, certainly not PBs
        ○ no need to scale capacity using complex distributed algorithms
      ● What about performance?
        ○ NVMe devices are widely available in the cloud
        ○ a single NVMe device can do up to 400K IOPS; 3D XPoint is on its way
      ● Cloud-native applications have built-in scalability
        ○ no need to scale a monolithic storage system by adding more drives to "RAID groups"
  19. What is cStor?
      (Reimagines how storage should work for cloud-native apps, on-prem and in the cloud)
      ● New storage engine that brings enterprise-class features, in containers, for containers
        ○ snapshots, clones, compression, replication, data integrity, ...
      ● Key enabler for cMotion (demo)
        ○ the ability to move data efficiently and incrementally cloud-to-cloud (c2c)
      ● Always consistent on disk (transactions)
      ● Data integrity and encryption, crucial for cloud deployments
      ● Online expansion of existing volumes (resize)
      ● Cloud-native design vs. cloud-washed
        ○ built from the ground up vs. an existing solution with container lipstick
  20. Under the hood of the cStor controller (diagram: the controller - also a container - fronting cStor replicas on node1, node2 and node3 over iSCSI, iSER, NVMe-oF, NFS(?))
      ● The controller serves out the blocks to the application
        ○ defined in YAML, deployed by Maya and Kubernetes
      ● Based on the replication level, the controller forwards the IO to the replicas (cStor)
      ● cStor is transactional and therefore always consistent on disk
      ● Copy on Write (CoW): data never gets overwritten but is written to an unused block
  21. Transactions
      (Atomic updates, data always consistent on disk - cStor itself is stateless)
      ● Each write is assigned a transaction
      ● Transactions are batched into transaction groups (for optimal bandwidth)
      ● The latest transaction number points to the "live" data
      ● Transaction numbers are updated atomically, which means that all writes in the group have either succeeded or failed
      ● A snapshot is a reference to an old transaction (and its data)
        ○ quick scan of the blocks newly written since the last transaction
      ● cMotion: send the blocks that have changed between two transactions
      ● All from nice and comfortable user space
        ○ no kernel dependencies (needed for c2c)
        ○ no kernel taints
  22. Storage performance challenges
      (Hardware trends force a change in the way we do things)
      ● How to achieve high performance numbers from within user space?
        ○ copy-in/copy-out of data between kernel and user space is expensive
        ○ context switches
      ● With current hardware trends, the kernel becomes the bottleneck
        ○ white-label 1U boxes serving out 17 million IOPS (!!!)
      ● Low-latency SSDs and 100 GbE network cards are becoming the norm
      ● 10 GbE NIC
        ○ 14.88 million 64-byte packets per second; the CPU has only a small budget of cycles per packet per NIC
      ● Clock frequency remains roughly the same while core count goes up
        ○ we've got cores to spare
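      A quick back-of-the-envelope check of that packet-rate figure (assuming standard Ethernet framing overheads and, for the cycle budget, a roughly 3 GHz core):

        minimum frame on the wire = 64 B frame + 8 B preamble + 12 B inter-frame gap = 84 B = 672 bits
        10 Gbit/s / 672 bits  ≈ 14.88 million packets/s  (≈ 67 ns per packet)
        67 ns x ~3 GHz        ≈ 200 CPU cycles per packet per NIC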
  23. It's raining IOPS: hardware performance, kernel vs. user space (chart)
      Source: https://software.intel.com/en-us/articles/accelerating-your-nvme-drives-with-spdk
  24. Achieve higher performance in user space
      ● Solution: bypass the kernel subsystem altogether, running everything in user space
      ● Instead of doing network or disk IO through the kernel, we submit the IO to another container which has direct access to the hardware resources (IOC)
        ○ map NIC rings to user space
        ○ map PCI BARs from NVMe devices
        ○ lockless design and message passing between cores
      ● Poll Mode Drivers (PMD)
        ○ 100% CPU
      ● Borrow from VM technology to construct interfaces between containers
        ○ VHOST and VIRTIO-SCSI
  25. Summary
      ● cStor provides enterprise-class features, like your friendly neighbourhood <insert vendor> storage system
      ● Provides data integrity features missing natively in the Linux kernel
      ● Provides the ability to work with data efficiently through the use of CoW
      ● Bypasses the kernel for IO to achieve higher performance than the kernel path
      ● Cloud-native design: uses cloud-native paradigms to develop and deploy
      ● Removes friction between developers and storage admins
      QUESTIONS?
  26. cMotion Demo setup overview
  27. MayaOnline cMotion demo setup overview (diagram: user's CI/CD with a Jenkins pod in GCP zone US-Central, K8s cluster austin-cicd; GCP zone Europe East, K8s cluster Denmark-cicd; AWS zone US East, K8s cluster mule-master)
      Part 1: Show the CI/CD setup working with Jenkins and GitHub
  28. (same diagram) Part 2: Move the Jenkins pod to the GCP Denmark K8s cluster and show the GitHub CI/CD working
  29. (same diagram) Part 3: Move the Jenkins pod to the AWS K8s cluster and show the GitHub CI/CD working
  30. (same diagram) Part 4: Move the Jenkins pod to the GCP Austin cluster and show the GitHub CI/CD working
  31. AMA: Ask me anything Q & A
  32. Container Attached Storage (CAS) = DAS++
      DAS
        Benefits: simple; ties application to storage; predictable for capacity planning; app deals with resiliency; can be faster
        Concerns: under-utilized hardware (10% or less utilization); wastes data center space; difficult to manage; lacks storage features; cannot be repurposed - made for one workload; does not support mobility of workloads via containers; cross-cloud impossible
      "YASS": Distributed
        Benefits: centralized management; greater density and efficiency; storage features such as data protection and snapshots for versioning
        Concerns: additional complexity; enormous blast radius; expensive; requires storage engineering; challenged by container dynamism; no per-microservice storage policy; I/O blender impairs performance; locks customers into a vendor; cross-cloud impossible
      OpenEBS = "CAS"
        ✓ Simple ✓ No new skills required ✓ Per-microservice storage policy ✓ Data protection & snapshots ✓ Reduces cloud vendor lock-in ✓ Eliminates storage vendor lock-in ✓ Highest possible efficiency ✓ Large & growing OSS community ✓ Natively cross-cloud ✓ Uses proven code - ZFS & Linux ✓ Maya -> ML-based analytics & tuning

Editor's notes

  1. Ask questions - what good is a plan to you? What are you hoping to get out of this session?
  2. Hyperconverged
  3. Hyperconverged with the CO. Smaller blast radius with a micro-services-like storage controller architecture. Seamless management interface, similar to Kubernetes (Kubernetes itself). Extends the capabilities of the CO with storage management. Benefits of locally attached storage, with high availability provided via synchronous replication. Easy to migrate across nodes, clusters, and infra (no cloud vendor lock-in).
  4. So first, this may seem odd, but I want to explain what cStor is not and, more importantly, why not.
  5. We are storage veterans, so these are features that we believe are needed; not everything on the list is done yet, but it is certainly in the pipeline.
  6. Summarize into three for us