
OpenStack SDN


A short explanation of Neutron.


  1. OpenStack SDN with Neutron and GRE. By: Adrián Norte
  2. What is SDN? SDN (Software Defined Networking) is an abstracted approach to networking that allows complex networks to be created, managed, and deleted programmatically. Usually the data plane is managed via OpenFlow (using Open vSwitch on Unix systems) and the control plane is managed with Neutron on OpenStack.
  3. Neutron, a NaaS provider. Neutron is a NaaS (Networking as a Service) provider on OpenStack, formerly known as Quantum. It provides an API that lets the admin manipulate the SDN system easily through several plugins; the most widely used is ML2 with Open vSwitch.
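The ML2-with-Open-vSwitch setup mentioned above is driven by a small configuration file. A typical fragment enabling GRE tenant networks might look like the sketch below (the path and the exact values are common examples, not taken from the slides):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (example values)
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
# Range of GRE tunnel IDs Neutron may hand out to tenant networks
tunnel_id_ranges = 1:1000
```

After editing, neutron-server and the Open vSwitch agents have to be restarted for the change to take effect.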
  4. GRE (Generic Routing Encapsulation). GRE is used to connect each compute node with the neutron nodes via tunnels; those tunnels carry the VMs' (Virtual Machines) traffic. OpenStack also supports VLAN and VXLAN segmentation. Why tunneling? With a tunnel you can encapsulate the VM traffic inside packets and carry it, together with the SDN information, to the node on the other side for delivery; it is like abstracting one network from the other, so you can have many logical networks on one single traditional network. GRE, like any other tunneling technology, adds a small overhead, so the full 1500-byte MTU no longer fits and the VM needs to use an MTU of 1450.
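The MTU figure above can be sanity-checked with a quick calculation: GRE-over-IPv4 adds a 20-byte outer IPv4 header plus a 4-byte base GRE header (8 bytes when the optional key field is used, as Neutron does with `in_key=flow`).

```shell
# GRE-over-IPv4 overhead: outer IPv4 header + base GRE header
outer_ip=20
gre_base=4            # grows to 8 when the optional key field is present
overhead=$((outer_ip + gre_base))
inner_mtu=$((1500 - overhead))
echo "$inner_mtu"     # 1476
```

The exact headroom is 1476 (or 1472 with the key field); Neutron advertises the rounder, safer 1450 to guests via DHCP.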
  5. Where are the routers and DHCP servers? Neutron uses network namespaces to create virtual routers and DHCP servers for separate networks inside the same node without collisions. For the DHCP servers it uses dnsmasq to provide FQDNs and the IP leases. [root@neutron ~]# ip netns qdhcp-36e20040-22da-4c57-a08d-0a96ffd53cb1 qrouter-39224929-27d1-4343-bd9f-5b62177a6702
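To look inside one of those namespaces, commands can be run in its context with `ip netns exec` (namespace names taken from the listing above; this needs root on the neutron node, so treat it as an illustrative sketch rather than something runnable anywhere):

```shell
# Interfaces and routes that exist only inside the virtual router
ip netns exec qrouter-39224929-27d1-4343-bd9f-5b62177a6702 ip addr
ip netns exec qrouter-39224929-27d1-4343-bd9f-5b62177a6702 ip route

# The qdhcp- namespace holds the tap interface dnsmasq listens on
ip netns exec qdhcp-36e20040-22da-4c57-a08d-0a96ffd53cb1 ip addr
```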
  6. What are network namespaces? They are one of the resource-isolation features of the Linux kernel (network namespaces have been available since version 2.6.24). Together with cgroups, which limit and account resources, this is what Docker is built on: you can have several networks that cannot see each other, and likewise isolated users, processes, and other resources. To list the network namespaces you can use: ip netns list
  7. Neutron node and compute node
  8. Explanation. A packet comes from the Internet to our VM: 1. It arrives at the br-ex interface, an OVS bridge bound to a physical network card. 2. It is passed into the router assigned to the VM's network. 3. It goes to the br-int OVS bridge, which tags it with the GRE tunnel ID for this network. 4. It passes to the br-tun OVS bridge (bound to another physical NIC) via a patch port, and br-tun sends the packet through the tunnel to the compute node. 5. The compute node's br-tun receives the packet and hands it to br-int. 6. br-int checks the tag and, based on it, hands the packet to one of the Linux bridges attached to it. 7. The Linux bridge hands it to the TAP interface attached to the VM, and it is processed by the VM.
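Each hop in the path above can be observed on a live deployment with standard tools (bridge names from the slides; these commands need root and an actual Open vSwitch install, so treat this as a checklist rather than a runnable script):

```shell
ovs-ofctl dump-flows br-ex   # step 1: rules on the external bridge
ip netns list                # step 2: find the qrouter- namespace
ovs-ofctl dump-flows br-int  # steps 3 and 6: tunnel-ID tagging rules
ovs-vsctl show               # step 4: br-tun and its GRE ports
brctl show                   # steps 6-7: Linux bridges and their tap interfaces
```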
  9. 1. It arrives at br-ex. What happens is that we have defined a br-ex bridge with Open vSwitch (different from a Linux bridge) that has a port bound to the interface connected to the outside. Imagine that every bridge is a switch and a port is just that, a port on that switch; when you define one, you are plugging a cable into it. So when you define that port, you are saying that everything arriving at the NIC should be handed over to the br-ex bridge. What the following OpenFlow rule says is that br-ex should act as a normal L2 switch. [root@neutron ~]# ovs-ofctl dump-flows br-ex NXST_FLOW reply (xid=0x4): cookie=0x0, duration=2246829.205s, table=0, n_packets=2698390003, n_bytes=856806232811, idle_age=0, hard_age=65534, priority=0 actions=NORMAL
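For reference, the NORMAL rule shown in that dump is the kind of entry that could be installed by hand as below (a sketch; in practice Neutron's agent manages these rules itself):

```shell
# Make br-ex behave like a plain learning L2 switch
ovs-ofctl add-flow br-ex "priority=0,actions=NORMAL"
```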
  10. 2. It is passed into the router assigned to the VM's network. • When you assign a floating IP to a VM, a port is created on the br-ex bridge bound to an interface located in a virtual router (created using namespaces); that router also has an interface that acts as the gateway for the VMs on that network.
  11. 3. It goes to the br-int OVS bridge, which tags it with the GRE tunnel ID for this network. • When the packet is handed down through the floating IP interface to the router, it is already on br-int, so it is tagged with the GRE tunnel ID for that network.
  12. 4. It passes to the br-tun OVS bridge. • br-tun is another bridge that has a patch interface (a patch interface is like a cable connecting two switches) to br-int, which allows communication between the different OpenStack nova nodes and neutron nodes. • It has a GRE-type port for every node. Bridge br-tun Port "gre-0a000004" Interface "gre-0a000004" type: gre options: {in_key=flow, local_ip="", out_key=flow, remote_ip=""}
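A detail worth noting about that output: the OVS agent derives the GRE port name from the hexadecimal form of the tunnel's remote IP, so "gre-0a000004" corresponds to a remote endpoint of 10.0.0.4. The decoding can be checked in shell:

```shell
# Decode the remote IP encoded in the port name gre-0a000004
name=0a000004
o1=$((0x$(echo "$name" | cut -c1-2)))
o2=$((0x$(echo "$name" | cut -c3-4)))
o3=$((0x$(echo "$name" | cut -c5-6)))
o4=$((0x$(echo "$name" | cut -c7-8)))
ip="$o1.$o2.$o3.$o4"
echo "$ip"   # 10.0.0.4
```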
  13. 5 and 6. The compute node's br-tun receives the packet and hands it to br-int. • When the packet is received, br-int checks the tunnel ID and hands it over to the port attached to the respective Linux bridge that holds the TAP interface for the VM. • Linux bridges are used because OVS bridges and ports cannot be matched in iptables rules, and the OpenStack security groups use iptables.
  14. 7. The Linux bridge hands it to the TAP interface attached to the VM. • This is the end of the journey: when the packet reaches the TAP interface, it is received by the hypervisor, which copies it into the VM's memory space, where it is processed by the VM.
  15. Improving performance. • To get an acceptable connection speed with the VMs you need to disable offloading. • ethtool -K <interface> gro off tso off gso off • Why disable offloading? Offloading is a mechanism that delegates some preprocessing of packets to the physical NIC; it works fine without virtualization, but this preprocessing strips some headers, and that hurts the SDN.
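The ethtool change above does not survive a reboot. One common way to persist it (a Debian-style example; eth1 is a placeholder for the tunnel-facing NIC, not a name from the slides) is a post-up hook:

```
# /etc/network/interfaces (Debian-style; eth1 is a placeholder)
auto eth1
iface eth1 inet manual
    post-up ethtool -K eth1 gro off tso off gso off
```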