
Introduction to CNI (Container Network Interface)

A brief introduction to CNI (the Container Network Interface): what CNI is, why it was developed, how to use it, and how Docker's bridge network is implemented.
We also introduce the pause container, the Kubernetes Pod, and how Kubernetes uses CNI.
Finally, we use Flannel as an example to show how to install a CNI plugin into your Kubernetes cluster.



  1. 1. From getting started to giving up: it turns out CNI is this complicated
  2. 2. WHO AM I ✖ Hung-Wei Chiu (邱宏瑋) ✖ Hwchiu.com ✖ Experience ○ Software Engineer at Linker Networks (now) ○ Software Engineer Synology ○ Co-organizer of SDNDS-TW ○ Co-organizer of CUTNG ✖ Fields ○ Linux Kernel, Network Stack ○ SDN/SDS/Kubernetes
  3. 3. OUTLINE ✖ What is CNI ✖ How Kubernetes use CNI ✖ What is Flannel
  4. 4. What is CNI Container Network Interface
  5. 5. Before that, we need to quickly review how Docker sets up the network for its containers.
  6. 6. How Docker Works ✖ Mount namespaces ✖ IPC namespaces ✖ PID namespaces ✖ Network namespaces ✖ User namespaces ✖ UTS namespaces
  7. 7. We use the default docker network, Bridge Mode.
  8. 8. ✖ docker run -p 12345:80 nginx ✖ You can access nginx via localhost:12345 ✖ The nginx container has network connectivity
  9. 9. Linux Bridge Network ✖ Create a linux bridge ✖ Create a linux network namespace ✖ Create a veth pair ✖ Attach the veth pair into the namespace and linux bridge ✖ Set the ip address ✖ Set the route rules ✖ Set the iptables
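The bullet list above maps almost one-to-one onto ip(8) and iptables invocations. A minimal Python sketch that only generates the command strings (names like br0, ns1, veth0/veth1 and the 172.18.0.0/24 addresses are illustrative; actually running the commands requires root):

```python
# Sketch: generate the ip(8)/iptables commands for the bridge steps above.
# All names and addresses are illustrative, not Docker's actual defaults.

def bridge_network_cmds(bridge="br0", netns="ns1",
                        host_if="veth0", ns_if="veth1",
                        addr="172.18.0.2/24", subnet="172.18.0.0/24",
                        gateway="172.18.0.1"):
    return [
        f"ip link add {bridge} type bridge",                  # create a Linux bridge
        f"ip netns add {netns}",                              # create a network namespace
        f"ip link add {host_if} type veth peer name {ns_if}", # create a veth pair
        f"ip link set {ns_if} netns {netns}",                 # move one end into the namespace
        f"ip link set {host_if} master {bridge}",             # attach the other end to the bridge
        f"ip netns exec {netns} ip addr add {addr} dev {ns_if}",      # set the IP address
        f"ip netns exec {netns} ip route add default via {gateway}",  # set the route rules
        f"iptables -t nat -A POSTROUTING -s {subnet} -j MASQUERADE",  # NAT for outbound traffic
    ]

if __name__ == "__main__":
    for cmd in bridge_network_cmds():
        print(cmd)
```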
  10. 10. Diagram: on each Linux host, a veth pair (vth0/vth1) connects the namespace ns1 to the bridge br0.
  11. 11. Now, how about the Docker OVS network?
  12. 12. OVS Network ✖ Create an OVS bridge ✖ Create a linux network namespace ✖ Create a veth pair ✖ Attach the veth pair to the namespace and the OVS bridge ✖ Set the ip address ✖ Set the route rules ✖ Set the OVS options
  13. 13. As a network plugin developer ✖ How many container systems are there? ✖ Docker ✖ rkt ✖ LXC/LXD ✖ OpenVZ ✖ runC ✖ ….
  14. 14. Should I Rewrite My Network Plugin for Every Container Runtime?
  15. 15. Container Network Interface
  16. 16. Container Network Interface ✖ Cloud Native Computing Foundation Project ✖ Consists of a specification and libraries. ✖ Configure network interfaces in Linux containers ✖ Concerns itself only with network connectivity of containers ○ Create/Remove
  17. 17. Container Network Interface ✖ Removes allocated resources when the container is deleted ✖ Written in Go
  18. 18. Why develop CNI?
  19. 19. To avoid duplication.
  20. 20. Who is using CNI?
  21. 21. From GitHub  rkt - container engine  Kubernetes - a system to simplify container operations  OpenShift - Kubernetes with additional enterprise features  Cloud Foundry - a platform for cloud applications  Apache Mesos - a distributed systems kernel  Amazon ECS - a highly scalable, high performance container management service
  22. 22. So, How to use the CNI?
  23. 23.  CNI_COMMAND=ADD  CNI_CONTAINERID=ns1  CNI_NETNS=/var/run/netns/ns1  CNI_IFNAME=eth10  CNI_STDIN=….
  24. 24.  CNI_COMMAND=ADD  CNI_CONTAINERID=ns1  CNI_NETNS=/var/run/netns/ns1  CNI_IFNAME=eth10  CNI_STDIN=…. Take the linux bridge as an example ✖ Create a linux network namespace ✖ Create a linux bridge ✖ Create a veth pair ✖ Attach the veth pair into the namespace and linux bridge ✖ Set the ip address ✖ Set the route rules ✖ Set the iptables IPAM
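Those environment variables plus a JSON network config on stdin (the slide's "CNI_STDIN") are the whole plugin contract. A minimal Python sketch that builds such a call; the plugin path and config values are illustrative, and the commented subprocess line shows how it would actually be executed:

```python
import json

# Sketch: how a runtime invokes a CNI plugin. Parameters travel in
# environment variables; the network config JSON goes in on stdin.
# Plugin path, network name and config values are illustrative.

def build_cni_call(command="ADD", container_id="ns1",
                   netns="/var/run/netns/ns1", ifname="eth10",
                   plugin="/opt/cni/bin/bridge"):
    env = {
        "CNI_COMMAND": command,          # ADD or DEL
        "CNI_CONTAINERID": container_id, # runtime's ID for the container
        "CNI_NETNS": netns,              # path to the container's netns
        "CNI_IFNAME": ifname,            # interface name to create inside it
        "CNI_PATH": "/opt/cni/bin",      # where plugin binaries live
    }
    stdin = json.dumps({
        "cniVersion": "0.3.1",
        "name": "mynet",
        "type": "bridge",
    })
    return [plugin], env, stdin

# To actually run it (needs a real plugin binary and root):
#   argv, env, stdin = build_cni_call()
#   subprocess.run(argv, env={**os.environ, **env}, input=stdin, text=True)
```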
  25. 25. CNI stdin example
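A typical network config passed on stdin looks like the following (this fragment targets the bridge plugin with host-local IPAM; all names, bridge/subnet values are illustrative):

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```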
  26. 26. How k8s uses CNI
  27. 27. How Kubernetes uses CNI ✖ Create a kubernetes cluster ✖ Set up your CNI plugin ✖ Deploy your first Pod
  28. 28. Just follow the installation guide to install Kubernetes
  29. 29. How do we install the CNI?
  30. 30. Step by step ✖ The kubelet has the following parameters for CNI: ✖ --cni-bin-dir ○ /opt/cni/bin ✖ --cni-conf-dir ○ /etc/cni/net.d/ ✖ We must configure CNI on every k8s node.
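Put together, a kubelet configured for CNI at the time of this deck looked roughly like the fragment below (flag fragment only, not a full command line; --network-plugin=cni was how CNI was enabled in older kubelets, and the paths are the conventional defaults):

```shell
# Illustrative kubelet flag fragment:
kubelet --network-plugin=cni \
        --cni-bin-dir=/opt/cni/bin \
        --cni-conf-dir=/etc/cni/net.d
```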
  31. 31. Let's Deploy a Pod
  32. 32. Before we start ✖ Pod ○ A collection of containers
  33. 33. Steps ✖ Load the Pod config ○ Multiple containers ✖ Find a node to deploy the pod ✖ Create a Pause container ✖ Load the CNI config ✖ Execute the CNI ✖ Create target containers and attach to Pause container
  34. 34. Linux Host
  35. 35. Linux Host Pause Container
  36. 36. Linux Host Pause Container Call /opt/cni/bin/CNI /etc/cni/net.d/ xxx.conf
  37. 37. Linux Host Pause Container Network Connectivity
  38. 38. Linux Host Pause Container Network Connectivity container1 container2 container3
  39. 39. Linux Host Pause Container Network Connectivity container1 container2 container3
  40. 40. Linux Host Pause Container Network Connectivity container1 container2 container3 POD
  41. 41. What is flannel
  42. 42. flannel ✖ Famous CNI plugin ✖ Created by CoreOS ✖ Layer 3 network (VXLAN, UDP) ○ VXLAN >>>> UDP ○ Kernel space >>>> user space ✖ Easy to set up (one YAML) ✖ Centrally managed via etcd/k8s API ○ For the k8s API, we need to set --pod-cidr for the kubelet
  43. 43. What is VXLAN ✖ Virtual eXtensible LAN ✖ Overlay network ✖ Based on UDP
  44. 44. How it works ✖ Original packet: 192.168.78.2 -> 192.168.87.2 ✖ Additional headers wrapped around it: outer IP 138.197.204.124 -> 138.68.49.202, then UDP/VXLAN, then the inner Layer 2 frame
  45. 45. How it works ✖ VTEP-1 should know that 192.168.87.0/24 is forwarded to 138.68.49.202 ✖ VTEP-2 should know that 192.168.78.0/24 is forwarded to 138.197.204.124
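The two statements above amount to a small forwarding table mapping overlay subnets to underlay VTEP addresses. A minimal sketch (addresses taken from the slide; the /24 prefixes are assumed):

```python
import ipaddress

# Sketch: the forwarding knowledge each VTEP needs — which underlay
# host owns which overlay subnet. Addresses are the slide's examples.
FORWARDING = {
    ipaddress.ip_network("192.168.87.0/24"): "138.68.49.202",   # VTEP-1's table
    ipaddress.ip_network("192.168.78.0/24"): "138.197.204.124", # VTEP-2's table
}

def vtep_for(overlay_ip: str) -> str:
    """Return the underlay (VTEP) address that owns this overlay IP."""
    addr = ipaddress.ip_address(overlay_ip)
    for net, vtep in FORWARDING.items():
        if addr in net:
            return vtep
    raise LookupError(f"no VTEP for {overlay_ip}")
```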
  46. 46. How VTEPs know that ✖ Multicast ✖ L3 routing ○ BGP ✖ Unicast (Flannel) ○ Event-driven ■ Listen for netlink events ○ Static setting ■ Just set rules with a timeout
  47. 47. Step by step with Flannel ✖ Install the kubernetes cluster ✖ Apply the Flannel YAML ○ Deploys a CNI config to all k8s nodes.
  48. 48. Before we start ✖ ConfigMap ○ A global config for the whole k8s cluster ✖ DaemonSet ○ A pod that runs on every node.
  49. 49. step1 ✖ Create a k8s ConfigMap ✖ cni-conf -> for CNI ✖ net-conf -> for Flanneld
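A trimmed sketch of what that ConfigMap looks like (field values are illustrative; the real kube-flannel.yml carries the same two keys):

```yaml
# Sketch of the kube-flannel ConfigMap (trimmed).
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
data:
  cni-conf.json: |   # copied to /etc/cni/net.d on every node
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": { "isDefaultGateway": true }
    }
  net-conf.json: |   # read by flanneld
    {
      "Network": "10.1.0.0/16",
      "Backend": { "Type": "vxlan" }
    }
```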
  50. 50. step2 ✖ Create a daemon set
  51. 51. The Problem ✖ Flannel is a CNI plugin ✖ A DaemonSet pod needs an IP address ✖ But the IP address comes from the Flannel CNI itself
  52. 52. step2 ✖ Create a daemon set ✖ Two containers
  53. 53. step3 ✖ Daemon loads the net-conf.json ✖ Gets the IP subnet via etcd/API ✖ Outputs the file /run/flannel/subnet.env: FLANNEL_NETWORK=10.1.0.0/16 FLANNEL_SUBNET=10.1.17.1/24 FLANNEL_MTU=1472 FLANNEL_IPMASQ=true
  54. 54. step4 ✖ Fetch the VXLAN info from etcd/API ✖ Create a VXLAN interface ✖ Set the VXLAN routing rules
  55. 55. step5 ✖ When k8s decides to deploy a pod on this node ✖ Call the flannel CNI ○ Load the config from /etc/cni/net.d ○ We copied those files in via the daemon set ✖ The flannel CNI ○ Loads /run/flannel/subnet.env ○ Gets one available IP address and assigns it to the Pod
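Step 5 can be sketched as: parse subnet.env, then hand out the next free address in this node's subnet. This is a simplification — the real flannel CNI delegates address management to the host-local IPAM plugin — but it shows the flow:

```python
import ipaddress

# Sketch of step 5: parse subnet.env and allocate pod IPs from this
# node's subnet. Simplified; flannel really delegates to host-local IPAM.

SUBNET_ENV = """\
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.17.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true
"""

def parse_subnet_env(text):
    """Turn KEY=VALUE lines into a dict."""
    return dict(line.split("=", 1) for line in text.splitlines() if line)

def allocate_ip(conf, used):
    """Return the next free host address in FLANNEL_SUBNET."""
    subnet = ipaddress.ip_network(conf["FLANNEL_SUBNET"], strict=False)
    gateway = conf["FLANNEL_SUBNET"].split("/")[0]   # .1 is the node's gateway
    for host in subnet.hosts():
        if str(host) != gateway and str(host) not in used:
            used.add(str(host))
            return str(host)
    raise RuntimeError("subnet exhausted")

if __name__ == "__main__":
    conf = parse_subnet_env(SUBNET_ENV)
    used = set()
    print(allocate_ip(conf, used))  # first pod  -> 10.1.17.2
    print(allocate_ip(conf, used))  # second pod -> 10.1.17.3
```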
  56. 56. Summary Node 1 Node 2 Node 3 Network 172.16.2.2 172.16.2.3 172.16.2.4 Kubernetes Cluster
  57. 57. Summary Node 1 Node 2 Node 3 Network 172.16.2.2 172.16.2.3 172.16.2.4 Kubernetes Cluster ConfigMap: cni-conf net-conf
  58. 58. Summary Node 1 Node 2 Node 3 Network 172.16.2.2 172.16.2.3 172.16.2.4 Kubernetes Cluster ConfigMap: cni-conf net-conf FlannelD 172.16.2.2 FlannelD 172.16.2.3 FlannelD 172.16.2.4 On each node: copy the cni-conf to /etc/cni/net.d and generate /run/flannel/subnet.env
  59. 59. Summary Node 1 Node 2 Node 3 Network 172.16.2.2 172.16.2.3 172.16.2.4 Kubernetes Cluster ConfigMap: cni-conf net-conf FlannelD 172.16.2.2 FlannelD 172.16.2.3 FlannelD 172.16.2.4 FLANNEL_NETWORK=10.1.0.0/16 FLANNEL_SUBNET=10.1.18.1/24 FLANNEL_MTU=1472 FLANNEL_IPMASQ=true FLANNEL_NETWORK=10.1.0.0/16 FLANNEL_SUBNET=10.1.19.1/24 FLANNEL_MTU=1472 FLANNEL_IPMASQ=true FLANNEL_NETWORK=10.1.0.0/16 FLANNEL_SUBNET=10.1.17.1/24 FLANNEL_MTU=1472 FLANNEL_IPMASQ=true Assume the network CIDR is 10.1.0.0/16
  60. 60. Summary Node 1 Node 2 Node 3 Network 172.16.2.2 172.16.2.3 172.16.2.4 Kubernetes Cluster ConfigMap: cni-conf net-conf FlannelD 172.16.2.2 FlannelD 172.16.2.3 FlannelD 172.16.2.4 Deploy three busybox pods: Busybox 10.1.17.5, Busybox 10.1.18.6, Busybox 10.1.19.7
  61. 61. Summary ✖ CNI ○ Avoids duplication ✖ K8S ○ Pause container ○ Other containers use --net=container:<pause containerID> to attach ✖ Flannel ○ VXLAN ○ DaemonSet + ConfigMap
  62. 62. Thanks! Any questions?
