
Joint OpenStack Kubernetes Environment (OpenStack Summit)

Presented at the OpenStack summit, this presentation discusses the practical reality & timing of using Kubernetes as an underlay for OpenStack.


  1. 1. Will it blend? Joint OpenStack Kubernetes Environment A pragmatic operational assessment of whether and when Kubernetes can become an underlay for OpenStack.
  2. 2. TL;DR: Not today. In the future, yes.
  3. 3. Rob Hirschfeld (aka Zehicle online) In Community: OpenStack Board Member (4 years) Co-Chair of Kubernetes Cluster Ops SIG Founder of Digital Rebar & Crowbar Projects Professional: CEO of RackN - hybrid automation software Executive at Dell - scale data center ops Cloud Data Center Ops going back to 1999
  4. 4. Addressing Operators’ Needs Operational Success is Essential to Project Success Operators are not developers! Simple, Transparent and Stable are key concerns Becoming a super-user of the platform should not be required to run it Scale & Upgradability have both internal and external drivers Generally, Kubernetes has good operational fundamentals
  5. 5. We’re Talking Underlay, not Overlay We’re talking about installing Kubernetes first (aka underlay) and using it to manage the OpenStack control plane. This approach is not a win if we ● Disable Kubernetes management ● Still need outside management tooling For now, we’ll ignore that our user may actually want to use Kubernetes as the overlay. IMHO, a bad assumption. [Diagram: simplest conception of the K8s/OpenStack sandwich: Physical Infrastructure → Kubernetes Underlay → OpenStack → Kubernetes Overlay; this talk covers the underlay layer.]
  6. 6. What is Kubernetes? Container Scheduler (no, it’s not really Orchestration) API-driven to provide restart, placement, network routing and life-cycle for applications designed for Kubernetes Key Design Elements: Immutable Infrastructure (stateless ops), 12-Factor Configuration, Service Oriented
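The restart/placement/life-cycle behavior named on this slide is driven by reconciliation: the control plane continuously compares desired state against actual state and issues corrective actions. A minimal, hypothetical Python sketch of that loop (the `desired`/`actual` dicts and action tuples are illustrative, not the Kubernetes API):

```python
# Conceptual sketch of Kubernetes-style reconciliation (illustrative only):
# the control plane drives actual state toward declared desired state.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to converge actual replicas to desired."""
    actions = []
    for app, want in desired.items():
        have = actual.get(app, 0)
        if have < want:
            actions.append(("start", app, want - have))   # restart / placement
        elif have > want:
            actions.append(("stop", app, have - want))    # scale down
    for app in actual:
        if app not in desired:
            actions.append(("delete", app, actual[app]))  # life-cycle: removal
    return actions

# A crashed container simply shows up as a deficit on the next iteration:
print(reconcile({"keystone": 3}, {"keystone": 2, "old-svc": 1}))
# → [('start', 'keystone', 1), ('delete', 'old-svc', 1)]
```

This is why the deck stresses immutable, stateless containers: the loop freely starts and stops replicas, so anything a replica cannot lose must live outside it.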
  7. 7. What is Kubernetes: A Three Tier Application [Architecture diagram, stages 0-5: 0 Client (KubeCtl); 1 Prereq (host network, host storage, host init, Certificate Authority); 2 Control (etcd cluster, API cluster, Scheduler leader, Controller leader); 3 Nodes (Kubelet, Proxy, Container Manager, CNI network); 4 Add-Ons (DNS, DNS Watcher, Heapster, infrastructure APIs: routers, storage, LBs); 5 Apps (Pods)]
  8. 8. Together 4ever: API server + Kubelet [Same architecture diagram as the previous slide, emphasizing the pairing of the API server and the Kubelet]
  9. 9. Kubernetes = Rainbows?!
  10. 10. Why do we want Kubernetes as Underlay? Community Perception vs Accuracy: 1) OpenStack Operations is still really hard: True. 2) We already do most deploys in containers: Partially. 3) Kubernetes is awesome at containers: Partially. 4) Kubernetes means free Upgrades and High Availability: False. 5) Kubernetes is simple, stable and secure (for operators): False.
  11. 11. First: We’re Confusing Technical and Marketing Marketing around Kubernetes under OpenStack is a “hot mess” ● People heard “Kubernetes is stable, OpenStack is not” ● Further confuses the “OpenStack is one platform” message Who is promoting this? Mirantis / CoreOS / Intel / Google Confusion with the Plain Old Container Install (“POCI”) message: ● Canonical (Ubuntu Cloud Install) ● Rackspace (OpenStack Ansible) ● Cisco (Kolla)
  12. 12. Second: Why I’m scared This discussion keeps kicking the operations & install problems down the road Kubernetes is much newer than OpenStack, so even less understood Yet more complexity and some very basic questions: ● Now we have both a Kubernetes and an OpenStack upgrade problem ● We still need tooling to manage OpenStack in Kubernetes ● We still need someone to package the containers ● Relies on Docker to keep systems running ● Storage and Networking are still being worked out
  13. 13. But, it’s going to happen anyway… So let’s get pragmatic about it.
  14. 14. Key Principle: Containerization vs Kubernetes Containers can be treated as 1) lightweight VMs or 2) packaged daemon sets. ● Canonical builds its containers like persistent VMs and configures with Juju ● Kolla & OSA treat containers as packaging and configure with Ansible Kubernetes accepts neither approach: it expects containers to be immutable and 12-factor configured ● Kubernetes manages the full container life-cycle ● Containers need to be able to handle being added and removed ● Services need to be able to handle IP address changes (or use DNS names)
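The 12-factor expectation in the bullets above can be made concrete: configuration is read from the environment at start-up, and peers are addressed by stable DNS names rather than IPs that change on reschedule. A hedged Python sketch; the variable names and the `mariadb.openstack.svc` style service names are invented for illustration:

```python
import os

# 12-factor style config: everything comes from the environment,
# nothing is baked into the container image.
def load_config(env=os.environ) -> dict:
    return {
        # Peers referenced by DNS name (a Kubernetes Service name),
        # never by IP: pod IPs change when containers are rescheduled.
        "db_host": env.get("DB_HOST", "mariadb.openstack.svc"),
        "db_port": int(env.get("DB_PORT", "3306")),
        "amqp_url": env.get("AMQP_URL", "amqp://rabbitmq.openstack.svc:5672"),
    }

cfg = load_config({"DB_HOST": "mariadb.openstack.svc", "DB_PORT": "3307"})
print(cfg["db_port"])  # → 3307
```

A container configured this way can be added, removed, or moved by Kubernetes without any in-place reconfiguration, which is exactly the property the Juju/Ansible "lightweight VM" style lacks.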
  15. 15. Specific Technical Barriers Host / Pinned vs Managed Containers ● It is possible to disable Kubernetes management and pin containers ● This eliminates the desired benefit of using Kubernetes How to handle Layered SDN integrations? Who wins? How to handle expectations of container persistence? Assumptions of Exclusive Ownership / Administrative Control
  16. 16. General Challenges to Overcome Complexity: ● Overall complexity of more components ● Need to control Kubernetes & Kubernetes stability Networking: ● Need for multiple tiers of load balancer ● IP mobility in service registration & message bus ● Mixed networking models Utility: ● Upgrades and maintenance of the underlay ● Mixing Kubernetes workloads
  17. 17. There are REAL Potential Benefits Part of Kubernetes Ecosystem (which is likely bigger than OpenStack’s) Leverage Docker packaging efforts and reduce Python & O/S dependencies Upgrades would benefit from Kubernetes built-in processes Use of the Kubernetes job scheduler for maintenance “Free” fault tolerance of key components Easier install if Kubernetes already running on-site More constrained options for configuration and operation
  18. 18. How could this actually be done quickly? Focus on the control plane, leave the nodes alone. ● Workers are “pinned” and agents need administrative control Cherry-pick services to move into Kubernetes ● Focus on web services ● Separate out support services (data & message bus) Externalize data using a service registry Make sure OpenStack projects can handle immutable container requirements [Diagram: Physical Infrastructure → Kubernetes Underlay → OpenStack Mgmt, with OpenStack Nodes and Support Services alongside, outside Kubernetes]
  19. 19. More Detail: Ops Underlay vs OpenStack Underlay [Diagram: Physical Infrastructure running Kubernetes Controllers and Kubernetes Workers; OpenStack Mgmt hosted on the workers; Database, Message Bus, Load Balancer, Software Defined Networking and OpenStack Nodes outside the Kubernetes underlay] If you really want to build this, give me a call - RackN has all the components
  20. 20. In summary: OpenStack operability is not solved by the underlay platform alone. Technical leadership motivation is required for OpenStack to adopt Kubernetes architecture requirements. The serious messaging confusion around this effort has to be resolved. However, this collaboration is required for OpenStack, because Kubernetes will have a larger footprint in Operations.