
Using Camunda on Kubernetes through Operators


  1. 2021 Summit
  2. Introduction Surush Samani • Cloud-native enthusiast • Workflow automation • Various industries • Multinationals, startups • NBA addict My experience: “Technology is easy, people are hard. Invest in people and in culture. Culture is everything.” @CloudNativeL
  3. What is Cloud Native? A set of approaches/patterns to “build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.” A set of supporting technologies and OSS tools/frameworks is used to achieve that, including microservices, containers, serverless, immutable infrastructure (IaC), and more. The goal is resilient, manageable, and observable loosely coupled systems, managed by dedicated DevOps teams.
  4. Core Components: Cloud Infrastructure, Microservices, Containers, Automation, Modern Design, Backing Services (source: Defining Cloud Native | Microsoft Docs)
  5. Cloud Infrastructure • Leveraging the full advantages of cloud service models • Traditional data centers vs. cloud platforms (pets vs. cattle)
  6. Microservices (diagram: a monolith with web/frontend layer, service layer, and one shared database vs. microservices A, B, and C, each with its own service layer and database) • Isolate business capabilities through microservices • Each service is the single owner of its data store • Communication through standard protocols: HTTP(S), WebSockets, AMQP • A set of microservices together appears as one application to the end user • Full autonomy over the lifecycle • Independent scaling, availability, and integrity based on the importance of the service
  7. Containers (Are Containers Replacing Virtual Machines? - Docker Blog) • Consolidating the application, its dependencies, and runtime, all packaged in one container image • More control over the application, security concerns, and isolation thereof • The underlying operating system remains “clean”: conflicting dependencies do not occur, and the footprint is smaller • Support for both Windows and Linux, to accommodate legacy applications that can only run on Windows • Orchestrating containers through orchestration tools (k8s, OpenShift, Swarm, Rancher, AKS, EKS, GKE): scheduling, failover, scaling, health monitoring, (anti-)affinity, networking, service discovery, rolling upgrades
  8. Automation • Embracing DevOps culture and mindset • Multidisciplinary teams • Automating deployment of infrastructure to keep the desired state (immutable infrastructure deployment) • Testing the deployment • Versioning the deployment
  9. Backing Services • Monitoring • Storage • Databases (SQL, PostgreSQL) • Events (messaging, Service Bus) • Azure DevOps (CI/CD)
  10. Modern Design: the 12+ Factor Application. Originating from the original work written in 2011 (Adam Wiggins), with a new perspective in 2016 by Kevin Hoffman looking beyond the original 12 factors: Code Base, Dependencies, Configurations, Backing Services, Build/Release/Run, Processes, Port Binding, Concurrency, Disposability, Dev/Prod Parity, Logging, Admin Processes, plus API First, Telemetry, and Authentication/Authorization
  11. Cloud Native Flavours
  12. Kubernetes
  13. Kubernetes components: Compute, Storage, Network, Configuration, RBAC
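Compute and network come together in the most common pairing of these components: a Deployment that runs the Pods and a Service that exposes them. A minimal sketch (the image, names, and ports are placeholders, not from the talk):

```yaml
# Compute: a Deployment keeping two replicas of a Pod running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app        # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
---
# Network: a Service routing traffic to the Pods selected by the label.
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 80
```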
  14. What is a Controller? • A non-terminating control loop that checks the state of the system (in this case, the Kubernetes cluster) • Tracks at least one resource type • Kubernetes has built-in controllers like the Deployment controller and the Job controller • Control via the API server vs. direct control
  15. Operator Pattern: Watch (Observe), Analyse, Act (Reconcile)
  16. Operator Custom Definitions: Custom Resource Definition (CRD) => the definition to watch; Custom Resource (CR) => an instance of it
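As a sketch, a CRD and a matching CR for a hypothetical ZeebeCluster resource could look like this (the group, kind, and fields are invented for illustration; this is not an official Camunda CRD):

```yaml
# Hypothetical CRD: tells the API server about a new resource type to watch.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: zeebeclusters.example.com
spec:
  group: example.com
  names:
    kind: ZeebeCluster
    plural: zeebeclusters
    singular: zeebecluster
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
---
# A Custom Resource (CR): one instance the operator watches and reconciles.
apiVersion: example.com/v1
kind: ZeebeCluster
metadata:
  name: my-cluster
spec:
  replicas: 3
```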
  17. Operator phases: Phase I (Installation): automatic provisioning and configuration management. Phase II (Upgrades): minor upgrades including patches. Phase III (Lifecycle): application and storage lifecycle (backup, failure & recovery). Phase IV (Insights): visibility through metrics, alerts, log processing, and workload analysis. Phase V (Auto-pilot): horizontal/vertical scaling, auto configuration tuning, scheduling tuning, detecting abnormalities.
  18. Operator flavours, mapped against the five phases: Phase I (Installation), Phase II (Upgrades), Phase III (Lifecycle), Phase IV (Insights), Phase V (Auto-pilot)
  19. Helm operator flow: the Helm client (running in a CI/CD pipeline or on a workstation) retrieves charts from a chart repository and calls the Kubernetes API server to deploy them. The client needs to run with the cluster-admin role on the cluster.
  20. Ansible Operator Flow Detailed • Push architecture • Modules (playbooks and roles) • Inventory (list of machines)
  21. Go with Go? • Bare metal: Kubernetes API client SDKs (multiple languages) • Supported (community) libraries/frameworks: • KubeBuilder (Go) • Kubernetes Universal Declarative Operator (YAML) • Metacontroller (any programming language that supports webhooks and can handle JSON) => lambda hooks • Operator Framework: Operator SDK (Go) • Kopf (Python) • Java Operator SDK (Java)
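Metacontroller's lambda-hook model mentioned above can be sketched as a pure function: the webhook receives the observed parent object as JSON and returns the desired children plus a status. The shape below follows Metacontroller's sync-hook idea, but the resource names and fields are illustrative, not an exact payload schema:

```python
import json

# Sketch of a Metacontroller-style sync hook. Metacontroller POSTs the
# observed state to this function; it answers with the desired children.
# All names here (demo, -worker suffix) are made up for illustration.
def sync(request: dict) -> dict:
    parent = request["parent"]
    replicas = parent.get("spec", {}).get("replicas", 1)
    # Desired child: one Pod named after the parent resource.
    desired_pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": parent["metadata"]["name"] + "-worker"},
    }
    return {
        "status": {"observedReplicas": replicas},
        "children": [desired_pod],
    }

# Simulate the webhook call with a JSON body, as Metacontroller would send it.
body = json.dumps({"parent": {"metadata": {"name": "demo"},
                              "spec": {"replicas": 2}}})
response = sync(json.loads(body))
print(response["children"][0]["metadata"]["name"])  # demo-worker
```

Because the hook is just a JSON-in/JSON-out function, it can be written in any language that can serve a webhook, which is exactly the flexibility the slide points at.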

Editor's notes

  • Pets in a traditional datacenter means you treat a server like your pet: you give it a name, everyone notices when something happens to it, and you take care of it until it gets better.
    Cattle, on the other side, is all about immutable infrastructure. Servers aren't repaired: if one fails, it is destroyed and another one is provisioned through automation.

    Cloud native embraces this cattle model. Scale up/down and in/out as you see fit, with concepts like self-healing, monitoring, and scaling.
  • EKS (Amazon Elastic Kubernetes Service)
    AKS (Azure Kubernetes Service)
    GKE (Google Kubernetes Engine)

    - Scheduling: provision instances in sequence
    - Affinity: group containers nearby or far apart to increase performance or availability
    - Health monitor: detect failures
    - Failover: reprovisioning failed instances to a healthy state
    - Scaling: adding/removing instances
    - Service Discovery: Internal lookup
    - Networking : Internal network overlay
    - Rolling upgrades: Incremental upgrades with no downtime, auto rollback
  • Code Base : A single code base for each microservice, stored in its own repository. Tracked with version control, it can deploy to multiple environments (QA, Staging, Production).
    Dependencies: Each microservice isolates and packages its own dependencies, embracing changes without impacting the entire system.
    Configurations: Configuration information is moved out of the microservice and externalized through a configuration management tool outside of the code. The same deployment can propagate across environments with the correct configuration applied.
    Backing Services: Ancillary resources (data stores, caches, message brokers) should be exposed via an addressable URL. Doing so decouples the resource from the application, enabling it to be interchangeable.
    Build, Release, Run: Each release must enforce a strict separation across the build, release, and run stages. Each should be tagged with a unique ID and support the ability to roll back. Modern CI/CD systems help fulfill this principle.
    Processes: Each microservice should execute in its own process, isolated from other running services. Externalize required state to a backing service such as a distributed cache or data store.
    Port Binding: Each microservice should be self-contained with its interfaces and functionality exposed on its own port. Doing so provides isolation from other microservices.
    Concurrency: Services scale out across a large number of small identical processes (copies) as opposed to scaling-up a single large instance on the most powerful machine available.
    Disposability: Service instances should be disposable, favoring fast startups to increase scalability opportunities and graceful shutdowns to leave the system in a correct state. Docker containers along with an orchestrator inherently satisfy this requirement.
    Dev/Prod Parity: Keep environments across the application lifecycle as similar as possible, avoiding costly shortcuts. Here, the adoption of containers can greatly contribute by promoting the same execution environment.
    Logging: Treat logs generated by microservices as event streams. Process them with an event aggregator and propagate the data to data-mining/log management tools like Azure Monitor or Splunk and eventually long-term archival.
    Admin Processes: Run administrative/management tasks as one-off processes. Tasks can include data cleanup and pulling analytics for a report. Tools executing these tasks should be invoked from the production environment, but separately from the application.
    API First: Make everything a service. Assume your code will be consumed by a front-end client, gateway, or another service.
    Telemetry: On a workstation, you have deep visibility into your application and its behavior. In the cloud, you don't. Make sure your design includes the collection of monitoring, domain-specific, and health/system data.
    Authentication/ Authorization: Implement identity from the start. Consider RBAC features available in public clouds.
  • Kubelet: primary node agent that connects to control plane
    Kube-Proxy: A network proxy that does TCP/UDP/SCTP (stream) forwarding
    Controller Manager: a daemon embedding the core control loops (checking state, keeping watch): node controller checking on nodes, job controller, endpoints controller, service account & token controllers, IAM API access.
    Cloud controller manager: connect control plane to cloud provider apis
    etcd: distributed key-value store holding the cluster state.
    Scheduler: assigns pods to nodes, checking constraints.
  • Deployment: Deployment enables declarative updates for Pods and ReplicaSets.
    StatefulSet: StatefulSet represents a set of pods with consistent identities. Identities are defined as: network, storage. DaemonSet: DaemonSet represents the configuration of a daemon set (one pod per node, meant to run node-level tasks on the cluster).
    Pod: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts.

    PersistentVolume: is a storage resource provisioned by an administrator
    PersistentVolumeClaim: PersistentVolumeClaim is a user's request for and claim to a persistent volume.
    StorageClass: StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned.

    Ingress: Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc.
    Service: Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy.

    ConfigMap: ConfigMap holds configuration data for pods to consume.
    Secret: Secret holds secret data of a certain type.

    ServiceAccount: binds together a name, a principal that can be authenticated and authorized, and a set of secrets.
    User: Human user of Kubernetes cluster.

    Group: Set of Service Accounts or Users.
    Role: Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding.

    ClusterRole: ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding.
    ClusterRoleBinding: A cluster role binding grants the permissions defined in a role/clusterrole to a user or set of users. Permissions are granted cluster-wide.
    RoleBinding: A role binding grants the permissions defined in a role/clusterrole to a user or set of users. Permissions are granted within a namespace.
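The Role/RoleBinding pair described above can be sketched as manifests (the namespace, names, and the ServiceAccount are placeholders):

```yaml
# A namespaced Role: a logical grouping of PolicyRules.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo        # placeholder namespace
  name: pod-reader
rules:
  - apiGroups: [""]      # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# A RoleBinding: grants the Role's permissions to a subject within the namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: demo
  name: read-pods
subjects:
  - kind: ServiceAccount
    name: demo-sa        # placeholder ServiceAccount
    namespace: demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```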

  • Example 1: thermostat at home.
    Internal examples: The deployment controller and the Job Controller

    Control via API Server (Job controller): schedules jobs, and components within the control plane prepare the work so that the kubelet (node agent) can pick it up.

    Direct Control (cloud autoscaler): scaling the nodes in the cluster.
  • Watch: a pub/sub-style event pattern where you subscribe to your event type (the CRD) and its CR instances; the controller receives the events from the control plane.
    Analyse: the controller compares the desired state and the current state, based on the attributes defined (usually) in the CR.
    Act: the controller performs all the application-specific actions that a knowledgeable person would take under these circumstances to reach the desired state.
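The watch/analyse/act cycle above can be reduced to a small sketch, assuming state is modelled as plain dictionaries (a real operator would watch the API server through a Kubernetes client instead of calling this directly):

```python
# Illustrative reconcile loop: names and state shape are invented here,
# not taken from any real operator framework.
def analyse(desired: dict, current: dict) -> dict:
    """Compare desired vs. current state; return the fields to change."""
    actions = {}
    for key, want in desired.items():
        if current.get(key) != want:
            actions[key] = want
    return actions

def act(current: dict, actions: dict) -> dict:
    """Apply the actions so the current state converges on the desired state."""
    updated = dict(current)
    updated.update(actions)
    return updated

def reconcile(desired: dict, current: dict) -> dict:
    """One pass of the control loop; in practice this runs non-terminating,
    triggered by watch events from the control plane."""
    return act(current, analyse(desired, current))

state = {"replicas": 1}
state = reconcile({"replicas": 3, "version": "1.2"}, state)
print(state)  # {'replicas': 3, 'version': '1.2'}
```

The key property is idempotence: running `reconcile` again with an already-converged state produces no further changes, which is what makes the non-terminating loop safe.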

    TESLA: Autopilot.