
{code} and Containers - Open Source Infrastructure within Dell Technologies



Learn how the {code} team is building new infrastructure possibilities for persistent storage in all the major container ecosystems, such as Kubernetes, Docker, and Mesos, with native integrations, and contributing to the Container Storage Interface.


  1. {code} and containers Open Source Software, Dell EMC’s Contributions, and Highly Available Applications December 2017
  2. WHY IS OSS IMPORTANT?
  3. Intangible Benefits of Open Source Not reinventing the wheel Customization with benefits Motivated workforce Attracting top talent Standardized practices Business acceleration Cleaner software Cheaper Customer goodwill Community support Innovation Flexibility Freedom Integration
  4. True Benefits of Open Source Innovation Flexibility Freedom Integration
  5. FIFI and Integration • Freedom – to run software for any purpose and to redistribute it • Innovation – the ability to leverage a collaborative community of developers • Flexibility – to deploy the software in a manner that best meets the organization’s requirements • Integration – ability to easily integrate open source software with existing infrastructure It’s not about cost, it’s about integration
  6. Open Source at Dell Technologies – Contributing to and maintaining open source projects – Community engagement – Technical solution leadership {code} is a team of passionate open source engineers and advocates building community through contribution and engagement in emerging technologies. Platinum Sponsor
  7. thecodeteam.com
  8. Digital Business is Driving Software Innovation Microservices APIs Open source Containers Cloud Native Frameworks Analytics Insights Drive New Functionality, Which Drives New Data Applications/IoT Transforms Business Data Generated By New Applications
  9. Open source Containers Digital Business is Driving Software Innovation Microservices APIs Cloud Native Frameworks Analytics Insights Drive New Functionality, Which Drives New Data Applications/IoT Transforms Business Data Generated By New Applications
  10. Keys to Freedom and Flexibility • Embrace DevOps and open source • Remove unnecessary complexity • Run all applications from containers • Operate everything as software, including storage • Run applications as services • Use a platform that orchestrates and consumes all infrastructure natively for applications
  11. Applications are Changing ~2000: monolithic, big servers, slow changing → Today: loosely coupled services, many servers, rapidly updated
  12. Containers are Key • Lightweight • Becoming the de facto application packaging standard • Package of software binaries and dependencies • Easily portable across environments (on-prem and cloud) • Allows an ecosystem to develop around its standard • Docker, Mesos, and Kubernetes are currently the most popular container technologies • A persistent container stores data in an external volume • Additionally, an opportunity to run old applications in new ways Code and Dependencies Container
  13. Optimizing and Enabling Rapid Innovation Virtual machines Server Public Cloud Disaster Recovery Developer Laptop Server Cluster Data Center Generic Persistence Web Front End Background Workers SQL Database NoSQL Scale-Out Database Queue API Endpoint Development Test & QA Production Scale Out
  14. Virtual Machines and Containers Platform 2/Mode 1 Platform 3/Mode 2
  15. Challenges of deploying container platforms Visibility & Ease of Use Elasticity Data Storage & Protection Platform Sprawl & Siloed Organizations Financial Viability & Existing Investments Many Applications, Different Needs Business Challenges Infrastructure Challenges
  16. THERE IS NO SUCH THING AS A ”STATELESS” ARCHITECTURE. IT’S JUST SOMEONE ELSE’S PROBLEM. – Jonas Bonér, CTO of Lightbend
  17. Do you have any of these in your data center? • Databases – Postgres, MongoDB, MySQL, MariaDB, Redis, Cassandra • Search, Analytics, Messaging – Elasticsearch, Logstash, Kafka, RabbitMQ • Content Management – WordPress, Joomla, Drupal, SugarCRM • Service Discovery – Consul, ZooKeeper, etcd • Continuous Integration and Delivery – Jenkins, GitLab, SonarQube, Selenium, Nexus • Custom Applications – That Java app your company built Stateful and persistent applications
  18. Applications need data Lots of different types of persistent services to consider Files Blocks Documents Logstreams Time Series Media and Streaming Modern or Traditional Applications Storage Services Objects Your use case here…
  19. hub.docker.com/explore 7 of the top 15 require persistence 12/18/17
  20. What's the problem? • When I run a persistent application in a container, where does my data get stored? – The container holds the data directory and structure of the entire application – Optionally use local volumes • Stateless applications work well – nginx, httpd, kibana, haproxy, memcached, solr, celery $ docker run -v redisData:/data redis redisData /etc /var /bin /opt /data
  21. What's the problem? • Lose a container – Lose the data • Lose a server – Lose the data • Local data storage – Failed hard drives or failed RAID – Cannot scale beyond the physical limit of the server /etc /var /bin /opt /data
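To make the failure mode concrete, here is a minimal sketch using the plain Docker CLI (image and volume names are illustrative). A volume created by Docker's default local driver lives only on the host that created it, so losing that host means losing /data:

    $ docker run -d --name redis -v redisData:/data redis
    $ docker volume inspect redisData --format '{{ .Driver }}'
    local    # host-scoped: if this server dies, the data in /data dies with it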
  22. Storage is easy when abstracted by a hypervisor Everything is software defined with virtualized infrastructure Physical Servers Hypervisor/IaaS Compute Network Storage VM You get software defined storage from any storage platform: shared storage (FC/iSCSI/NFS) and/or storage provided through hyper-converged infrastructure. Data-plane abstraction by a hypervisor makes it easy to connect storage to VMs. With many heterogeneous servers and storage, maybe this is the right answer for you. DATA FLOW
  23. Cloud Native Thinking Applied… Be portable; focus on software and interoperability instead of data-plane abstraction Container Orchestrators Container OSs Storage resources are interoperable Cloud Storage Service Integration with Storage Orchestrators Cloud Native Storage Compute Network Storage DATA FLOW
  24. Interoperability for Storage Services • Orchestrators communicate with external storage platforms to perform storage lifecycle and orchestration features. • These integrations take place internal and external to container runtimes and orchestrators and their code bases. Container Orchestrator Cloud and Storage Platform Container Hosts Storage Plugins In-Tree Out-of-Tree Managed Plugins or Host Process
  25. Deploying Applications with Storage • Orchestrator ensures application and container are running at all times. • A new container will be created with the existing data if necessary. • The application process and container remain ephemeral. • Storage is orchestrated to the host and made available to the container. Storage is attached and detached from host instances where containers are targeted to run. deploy: container: redis volume: name: redisData /redisData /etc /var /bin /opt /data
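As a sketch of how a deployment spec like this looks in practice, assuming the REX-Ray service or plugin is already installed on every node and using illustrative names, a Compose file can declare the volume driver so the orchestrator handles attach/detach wherever the container is rescheduled:

    $ cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      redis:
        image: redis
        volumes:
          - redisData:/data
    volumes:
      redisData:
        driver: rexray
    EOF
    $ docker stack deploy -c docker-compose.yml redis   # on reschedule, the volume follows the container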
  26. Interfacing for Storage Services Container orchestrators and runtimes are able to make specific requests for storage services. Container Orchestrator Cloud and Storage Platform Container Hosts The interface achieves these things: • Create/Remove volumes • Inspect/List volumes • Attach/Detach volumes • Mount/Unmount volumes
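The same lifecycle verbs are exposed directly by the REX-Ray CLI. A rough sketch (subcommand names and flags vary slightly across releases; the volume name and size are illustrative):

    $ rexray volume create redisData --size=8   # create a volume on the configured platform
    $ rexray volume ls                          # inspect/list volumes
    $ rexray volume attach redisData            # attach to this host
    $ rexray volume mount redisData             # mount for container use
    $ rexray volume unmount redisData
    $ rexray volume rm redisData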
  27. Storage Plugins Interoperability Today [diagram: Docker Volume Driver Interface (DVDI, with DVDCLI), Flex Interface, In-Tree drivers, and CSI (gRPC), communicating with the storage platform via JSON over RPC, JSON over proc, and JSON/RPC over HTTP]
  28. Introducing REX-Ray REX-Ray The leading container storage orchestration engine enabling persistence for cloud native workloads rexray.codedellemc.com • Out-of-Tree Plugin
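A minimal install sketch, assuming a Linux host and the install script published in the REX-Ray documentation:

    $ curl -sSL https://rexray.io/install | sh   # install the rexray binary
    $ rexray version
    $ sudo rexray service start                  # run REX-Ray as a host-based service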
  29. Cloud Native Interoperability rexray.codedellemc.com DOCKER: Use the standalone Docker Engine to run a stateful application, or combine it with Docker Swarm Mode to turn your application into a robust service. APACHE MESOS: Use any framework that orchestrates containers, such as Marathon or Aurora, to provide persistent storage for stateful services. KUBERNETES: Provision stateful applications in pods through the CSI interface or FlexREX, and benefit from a broad set of storage platforms with CLI management capabilities. CONTAINER STORAGE INTERFACE: Use any container orchestrator that implements CSI to allow predictable interoperability with supported storage providers.
  30. REX-Ray Features rexray.codedellemc.com PERSISTENT STORAGE ORCHESTRATION FOR CONTAINERS ENTERPRISE READY OPEN SOURCE TRUSTED INTEROPERABILITY • Run any application using multiple storage platforms. • Resume state and save data beyond the lifecycle of a container. • Containers aren’t just for stateless applications anymore. • High-availability features for container restarts across hosts • Intuitive CLI • Contributions from the cloud native community and the leading storage vendor in the world. • A completely open and community-driven project • Constantly innovating and providing new integration points. • Community-contributed drivers, features, and additional functionality • Compatible with the Container Storage Interface (CSI) spec and implements all the volume lifecycle and orchestration aspects.
  31. REX-Ray Features rexray.codedellemc.com • A single interface with common volume lifecycle operations for all of your storage platforms. • In-tree and out-of-tree driver support is the fastest path for getting a storage platform compliant with CSI and all major COs • Includes support for storage platform types that cover block, file, and object. • Run any application that has any type of storage requirement • Installed as a single binary or deployed as a container. • Can be configured to include one or multiple storage platforms from a single stateless service. • Multiple architectural choices allow flexibility for deployments. • Configure as standalone to serve in a decentralized architecture. • Leverage the client/agent and controller for a centralized architecture MULTIPLE STORAGE PLATFORM SUPPORT STORAGE AGNOSTIC EFFORTLESS DEPLOYMENT STANDALONE OR CENTRALIZED
  32. REX-Ray Features rexray.codedellemc.com • All traffic is encrypted with TLS using pre-configured keys and certificates or auto-generated self-signed certificates. • A fingerprint feature using Token Based Authentication prompts an Agent to trust a Controller. SECURE BY DEFAULT
  33. Storage Platform Integration rexray.codedellemc.com • Elastic Block Storage (EBS) • Elastic File Storage (EFS) • Simple Storage Service (S3) • Use thin-provisioned EBS volumes as an alternative to reduce costs in AWS. • ScaleIO (block) • Isilon (NFS) • ECS (object) • Persistent Disks (PD) • Cloud Storage Buckets (CSB) • Digital Ocean Block Storage attaches persistent disks to your droplets. • Blob Storage Unmanaged Disks (block) • S3-compatible object storage • Local Disk • RADOS Block Devices (RBD) • RADOS Gateway (RGW) • Mount an S3 bucket as a directory using FUSE • Cinder volumes on any hardware (virtualized or bare-metal) can be used. • All vSphere supported storage vendors, including VMware vSAN
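For example, a minimal /etc/rexray/config.yml sketch for the Amazon EBS driver; the credentials are placeholders, and the exact keys follow the libStorage-era layout and differ per driver and release (check the REX-Ray docs for yours):

    $ sudo tee /etc/rexray/config.yml <<'EOF'
    libstorage:
      service: ebs
    ebs:
      accessKey: AKIA...     # placeholder AWS access key
      secretKey: <secret>    # placeholder AWS secret key
    EOF
    $ sudo rexray service restart   # pick up the new configuration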
  34. Development to Production Lifecycle Deploying containers using REX-Ray stays consistent from local development, to testing in the cloud, to production in on-premise datacenters ScaleIO Isilon VMware *not all supported storage platforms are shown
  35. Who’s Using REX-Ray? …and many more
  36. Kubernetes Integration • Kubernetes offers two approaches for storage integration. • The first is an “in-tree” volume plugin for a platform: the storage interface code is directly embedded into Kubernetes. The downside is that plugin velocity (the speed at which a plugin can be added, enhanced, or patched) is gated by the Kubernetes release cycle, and new in-tree storage vendors are no longer being accepted. • The second approach is to leverage the Container Storage Interface. This interface allows developers to focus on building a single driver that can interoperate across all Container Orchestrators. It satisfies the complete volume lifecycle while functioning as an out-of-tree driver. This is the future of storage integrations.
  37. Kubernetes Integration • ScaleIO is part of the core Kubernetes code and a first-class native storage provider • ScaleIO can take full advantage of the Kubernetes volume lifecycle features, including dynamic provisioning and storage classes • The ScaleIO driver is embedded in the standard distribution of Kubernetes • Contributed code from the {code} team passes the “Google” standard of quality • Opens a new opportunity for those running Kubernetes in on-premise data centers: it allows utilization of your commodity x86 server hardware for very high performance and highly available storage for running stateful apps in containers. • Native in-tree driver (REX-Ray is not needed)
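As an illustration of dynamic provisioning with the in-tree driver, here is a StorageClass sketch using the kubernetes.io/scaleio provisioner; the gateway endpoint, protection domain, storage pool, and secret name are placeholders for your own ScaleIO deployment:

    $ kubectl create -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: sio-small
    provisioner: kubernetes.io/scaleio
    parameters:
      gateway: https://sio-gateway:443/api   # placeholder ScaleIO gateway
      system: scaleio                        # placeholder system name
      protectionDomain: pd0                  # placeholder
      storagePool: sp0                       # placeholder
      secretRef: sio-secret                  # Secret holding ScaleIO credentials
    EOF

A PersistentVolumeClaim that names storageClassName: sio-small then gets a ScaleIO volume provisioned on demand.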
  38. OpenShift Integration • Kubernetes is used under the covers of OpenShift • Get all the benefits of ScaleIO without any additional configuration (batteries included) • Support persistence requirements for applications with a trusted open source platform • REX-Ray is not needed
  39. Container Storage Interface Universal Storage Interface for Container Orchestrators • CSI can leverage unique storage services from any storage provider, cloud or otherwise. Easy interoperability between storage and container orchestrators and true portability of containers between infrastructures. • The {code} team led this effort, in collaboration with the community, as key stakeholders in special interest group meetings to define the CSI specification, and were tasked by Kubernetes maintainers with implementing CSI in Kubernetes 1.9. • Kubernetes will eventually deprecate native in-tree drivers in favor of CSI https://github.com/container-storage-interface
  40. Kubernetes Integration • REX-Ray can function as a CSI driver that allows all supported storage platforms to be consumed • Allows pods to consume data stored on volumes that are orchestrated by REX-Ray. Using CSI, REX-Ray can provide uniform access to storage operations for any configured storage provider. • Run stateful applications in pods and benefit from CLI management capabilities. • Use any REX-Ray supported storage platform • REX-Ray can import and package up CSI drivers for more storage platform support • Contains a Docker bridge that allows CSI drivers to be used by Docker Swarm https://rexray.readthedocs.io/en/stable/user-guide/servers/csi/ REX-Ray
  41. CSI Drivers For Support of On-Premise Platforms • Created by {code} • Native out-of-tree drivers (REX-Ray not needed) VIRTUALIZED BARE METAL
  42. Docker Integration • Use the standalone Docker Engine to run a stateful application or combine it with Docker Swarm Mode to turn your application into a robust service. • Available in the Docker Store as a certified and trusted plugin • REX-Ray development started when the Docker Volume Driver interface was introduced in 1.7 Experimental • One of 6 available drivers during the DockerCon 2015 debut of the Docker Volume Driver • The {code} team has submitted 6 patches and over 100 lines of code to the Docker Engine for storage • Provides full volume lifecycle capability
  43. Docker Integration • Deploy as a host-based service or as a Docker Plugin Available in the Docker Store
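A sketch of the managed-plugin path, using the EBS plugin as one example; each plugin documents its own settings, and the values here are placeholders:

    $ docker plugin install rexray/ebs EBS_ACCESSKEY=<key> EBS_SECRETKEY=<secret>
    $ docker volume create --driver rexray/ebs --name redisData
    $ docker run -d -v redisData:/data redis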
  44. Docker Swarm Integration $ docker service create --name redis --replicas 1 --mount type=volume,src=redisData,dst=/data,volume-driver=rexray redis redisData /etc /var /bin /opt /data Storage Platform Persistent Volume
  45. Mesos Integration • Docker Integration • Use Docker as the underlying container technology • Specify REX-Ray as the Volume Driver for any persistent storage use case REX-Ray rexray.codedellemc.com
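A sketch of what that looks like as a Marathon app definition; the external-volume stanza follows the DC/OS external-storage docs, and the Marathon endpoint and names are placeholders:

    $ cat > redis.json <<'EOF'
    {
      "id": "/redis",
      "cpus": 0.5,
      "mem": 256,
      "instances": 1,
      "container": {
        "type": "DOCKER",
        "docker": { "image": "redis" },
        "volumes": [
          {
            "containerPath": "/data",
            "mode": "RW",
            "external": {
              "name": "redisData",
              "provider": "dvdi",
              "options": { "dvdi/driver": "rexray" }
            }
          }
        ]
      }
    }
    EOF
    $ curl -X POST http://marathon.mesos:8080/v2/apps -H 'Content-Type: application/json' -d @redis.json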
  46. Mesos Integration • Docker Volume Driver Isolator Module • No remote storage capability using Mesos before September 2015 • {code} was first to bring an abstracted persistent storage model • Freedom to use the Mesos Containerizer with ANY Docker volume driver (that means you don’t need Docker) • Merged mesos-module-dvdi upstream into Mesos 1.0 to provide storage platform support for any vendor who has written a Docker volume driver, including REX-Ray • Every company that boasts storage integration with Mesos uses the {code} contributed module.
  47. DC/OS Integration • DC/OS is a distributed operating system based on the Apache Mesos distributed systems kernel. • DC/OS 1.7 Release ships with REX-Ray embedded • DC/OS 1.8 Release ships with DVDI embedded for Mesos 1.0 • DC/OS 1.10 Release ships with REX-Ray v0.9 and DVDCLI 0.2 via Community (#1430) • Featured in the documentation – https://dcos.io/docs/1.8/usage/storage/external-storage/
  48. End-to-End Container Persistence Leadership for Mesos mesos-module-dvdi – What does it do: enables external storage to be created / mounted / unmounted with each agent task. Cool factor: first Mesos Agent external storage module, merged into Mesos 1.0 (2016). DVDCLI (Docker Volume Driver Client CLI) – What does it do: abstracts Docker Volume Drivers for Mesos. Cool factor: allows the use of any Docker volume driver with Mesos, vendor agnostic. REX-Ray – What does it do: storage orchestration engine. Cool factor: DC/OS 1.7+ framework (2016), community contributed upgrade in 1.10+.
  49. Mesos Container Framework Integration • Marathon from Mesosphere • Apache Aurora (originally used by Twitter) • Elasticsearch
  50. ScaleIO Framework for Mesos • A software-based storage framework using ScaleIO to turn commodity hardware and direct attached storage (DAS) into a globally accessible and scalable storage platform that can be deployed anywhere • The Framework installs and configures ScaleIO on all Mesos Agent (compute) nodes. As more Agents are added to the cluster, ScaleIO is installed and adds storage resources to the cluster. • Deploy and configure ScaleIO without knowing operational details • Centralized monitoring of the storage platform • Persistent storage native to the container scheduling platform using REX-Ray • https://github.com/codedellemc/scaleio-framework
  51. Storage Plugins Interoperability Today [diagram repeated from slide 27: DVDI/DVDCLI, Flex Interface, In-Tree drivers, and CSI (gRPC), communicating with the storage platform via JSON over RPC, JSON over proc, and JSON/RPC over HTTP]
  52. Solving the problem • REX-Ray (or the native Kubernetes driver) is installed and configured on all hosts in the cluster as a stateless service • Container engines redirect volume operations to the storage driver – Create/Mount/Unmount/Delete $ docker run --volume-driver=rexray -v redisData:/data redis /redisData /etc /var /bin /opt /data
  53. Solving the problem • Lose the container or lose the server – Data persists and remains intact on the remote storage platform /etc /var /bin /opt /redisData
  54. Solving the problem • Attach the volume to a new container on a different host – Equivalent of a hard reset: the application starts and resumes from the last write to disk – The container scheduler is responsible for creating a new container • Scalability – Application data can scale to the maximum supported by the storage platform /etc /var /bin /opt /data /redisData
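A sketch of that failover sequence with plain Docker on two hosts (host names are illustrative; in practice a scheduler automates the restart):

    host-a $ docker run -d --name redis --volume-driver=rexray -v redisData:/data redis
    host-a $ docker rm -f redis    # container (or the whole host) is lost
    host-b $ docker run -d --name redis --volume-driver=rexray -v redisData:/data redis
             # REX-Ray detaches the volume from host-a, reattaches it to host-b,
             # and Redis resumes from the last write to disk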
  55. Dell EMC ScaleIO Software that creates an enterprise-class virtual SAN on top of commodity server hardware OS, Hypervisor, and Media Agnostic • Utilizes commodity hardware • “Pay as you grow” – linear, predictable costs • No dedicated storage components (FC network, HBAs) • Add servers of any type or configuration to the pool • Retire servers on your own schedule • Eliminate migration • Scale performance linearly
  56. Scale Out Storage for Scale Out Apps
  57. #CodeOpen Demo
  58. Take our projects for a spin at the {code} Labs http://github.com/thecodeteam/labs
  59. github.com/thecodeteam thecodeteam.com/community @thecodeteam blog.thecodeteam.com {code} is a team of passionate open source engineers and advocates building community through contribution and engagement in emerging technologies. rexray.thecodeteam.com github.com/thecodeteam/labs vLab (For Partners) Use REX-Ray & ScaleIO w/ Docker, Mesos and Kubernetes

Editor's notes

  • Add talking points to each point
    OSS projects: Docker, Mesos, Kubernetes, Cloud Foundry
  • Switch out applications and data
  • Switch out applications and data
  • In this example, our application is storing data in “/mydata”
  • In this example, our application is storing data in “/mydata”
  • But let’s take virtualization as an example. The abstraction of every facet of compute/network/storage makes it very easy to consume. All of the heavy lifting is done by the hypervisor because the types and amounts of servers, storage, and networking aren’t visible to the container orchestrator, engine, or runtime. And maybe this is the right answer for you; to be honest, the immediate future of containers will be using a hypervisor. This creates a pseudo-software-defined storage platform behind the container orchestrator’s back. But we need to think a bit differently as this progresses, because the data-plane abstraction is relying on a virtualized mount to provide the data persistence.
  • If we take this to a cloud native concept, the data flow should be horizontal. Compute, networking, and storage are decoupled and fit together only through a series of universal interfaces. These interfaces are responsible for the orchestration of networking and storage on different types of clouds and on-premise products. This lateral data flow allows an application to be truly portable across any type of environment, and even spread across environments.
  • The API requests for storage orchestration to external platforms can come from two possibilities. The first is an in-tree driver, which is native code built into the CO. The second is an out-of-tree driver using a plugin. Each of these has its own pros and cons. The in-tree driver is subject to the release cycle of the CO. Out-of-tree storage plugins might not get all the benefits of an in-tree driver if the interfaces that are exposed can’t utilize the feature sets of the storage platform.
  • So how does this work? In this example we are looking at a Redis deployment. The orchestrator is responsible for making sure this application adheres to its spec and remediation policy. A new container is created, and storage is orchestrated to the host and made available to the container. If data currently exists on the volume, then it’s mounted and state resumes. If not, a clean volume is mounted and any init or kickstart scripts can begin. At this point, the application and container remain ephemeral, meaning that I can destroy the container and expect the CO to restart a new one. However, no data will be lost, since it rests on an external storage platform and the storage orchestration will take care of the volume lifecycle.
  • The volume lifecycle achieves a few different things; this is all made possible through various specific API requests to the storage platforms.
  • REX-Ray is one of the only solutions on the market today that provides support for multiple storage platforms. REX-Ray allows consistent deployments of applications from inception in development using VirtualBox to testing in the cloud. Many organizations have realized that their testing and QA environment is 2-3x the size of production infrastructure. Utilizing automation to spin up cloud resources on demand for testing and QA has drastically reduced the costs incurred. After a code release has passed all testing and is ready to move to production, REX-Ray can support Dell EMC storage platforms in your on-premise datacenter.
  • Deprecated
  • Deprecated
  • Deprecated
  • In this example, our application is storing data in “/mydata”
  • Docker Volume Driver Isolator Module
    No remote storage capability using Mesos before September 2015
    {code} was first to bring an abstracted persistent storage model
    Freedom to use the Mesos Containerizer with ANY docker volume driver (that means you don’t need Docker)
    Merged upstream into Mesos 1.0 to provide storage platform support for any vendor who has written a docker volume driver, including REX-Ray
    Every company that boasts storage integration with Mesos uses the {code} contributed module.

    DVDCLI

    DC/OS 1.7 Release ships with REX-Ray embedded
    DC/OS 1.8 Release ships with DVDI embedded for Mesos 1.0
    Featured in the documentation - https://dcos.io/docs/1.8/usage/storage/external-storage/



  • Explain the history of REX-Ray and libStorage in Mesos and highlight how Dell EMC took an industry best practice to develop these solutions through
    Open source
    Community building
    MVP, etc
    etc
  • Dell EMC ScaleIO is the premier choice for building a container as a service (CaaS) platform. Containerized workloads are unpredictable, and there is no magic spreadsheet formula that will help with sizing. (click) The true test of new applications isn’t realized until the service starts experiencing lots of traffic. (click) Unpredictability is where ScaleIO plays a critical role, since it has the ability to scale out with the application. (click) Server sprawl can now keep up with the pace of container sprawl. (click)(click)
  • First, a few things about the team that has made this possible.

The Dell EMC {code} team is made up of open source software engineers and developer advocates, focused on making EMC a well-known name within the open source community.

    We will focus on one of their projects, REX-Ray, in this presentation.
