This is the latest update to my OpenStack Networking / Neutron 101 slides, with more information and caveats on the new DVR and gateway HA features.
2. Agenda
• Network Models
• Nova-Networking vs. Neutron refresher
– Nova-Networking quick overview
– Nova-Networking Multi-Host mode
– Nova-Networking vs. Neutron at a glance
• Neutron plugin concept refresher
• Service plugins
• ML2 plugin vs. monolithic plugins
• New in Juno
– Distributed Virtual Router for OVS mechanism driver
– Neutron L3 High-Availability for virtual routers
– Neutron IPv6 Support
3. OpenStack Networking – Flat
• In the simple ‘flat’ networking model, all instances (VMs) are bridged to a physical adapter
• L3 first-hop routing is either provided by the physical networking devices (flat model), or by the OpenStack L3 service (flat-DHCP model)
• Sufficient in single-tenant or ‘full trust’ use cases where no segmentation is needed (besides iptables/ebtables between VM interfaces and the bridge)
• Doesn’t provide multi-tenancy, L2 isolation or overlapping IP address support
• Available in Neutron and in Nova-Networking
[Diagram: instances on two hosts bridged straight into the physical L2/L3 network through an access port (no VLAN tag)]
4. OpenStack Networking – VLAN based
• The VLAN-based model uses one VLAN per tenant network (with Neutron) to provide multi-tenancy, L2 isolation and support for overlapping IP address spaces
• The VLANs can either be pre-configured manually on the physical switches, or a Neutron vendor plugin can communicate with the physical switches to provision the VLANs
• Examples of vendor plugins that create VLANs on switches are the Arista and Cisco Nexus/UCS ML2 mechanism drivers (a minimal ML2 config sketch follows below)
• L3 first-hop routing can be done either:
– On the physical switches/routers, or
– As logical routers in Neutron (shown on the next slide)
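As a minimal sketch of how the VLAN model is typically expressed with the ML2 plugin (all values are examples; physnet1 is just a label that the agent configuration maps to a physical interface/bridge):

    [ml2]
    type_drivers = vlan
    tenant_network_types = vlan
    mechanism_drivers = openvswitch

    [ml2_type_vlan]
    # each tenant network gets a VLAN ID allocated from this range
    network_vlan_ranges = physnet1:100:199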
[Diagram: VMs attach to per-tenant VLANs; the hypervisor uplinks are VLAN trunk ports (VLAN tags used); a Neutron vendor plugin can create the VLANs on the physical switches through the vendor API]
5. OpenStack Networking – VLAN based
[Diagram: same VLAN trunk setup as the previous slide; logical routers on the Neutron network node handle the first-hop gateway function (L3) for the tenant networks]
6. OpenStack Networking Models – ‘SDN Fabric’ based
• In this model multi-tenancy is achieved using different ‘edge’ and ‘fabric’ tags. E.g. VLANs can be used to address the tenant between the hypervisor vSwitch and the top-of-rack switch, and some other tag is used inside the vendor’s fabric to isolate the tenants
[Diagram: hypervisor-to-top-of-rack links use some form of ‘edge tag’ (e.g. VLAN, VXLAN header, etc.); the vendor fabric uses some form of ‘fabric tag’ to address the tenant; a central controller controls both the vSwitches and the physical switches, and the Neutron vendor plugin talks to that controller through the vendor API]
• Usually a single controller controls both the vSwitches and the physical switches
• L3 first-hop routing and L2 bridging to the physical network are usually done in the physical switch fabric
• Single-vendor design for physical and virtual networking
• Examples: Big Switch, NEC, Cisco ACI, Brocade, Avaya, …
7. OpenStack Networking Models – Network Virtualization
• With the network virtualization (aka overlay) model, multi-tenancy is achieved by overlaying MAC-in-IP ‘tunnels’ onto the physical switching fabric (aka transport network)
• An ID field in the encapsulation header (e.g. VXLAN, GRE, STT) addresses the tenant network. Full L2 isolation and overlapping IP address space support are achieved
• The controller controls only the vSwitches and the gateways
• L3 first-hop routing and L2 bridging to the physical network are done either by software or hardware gateways (or both)
• Examples: VMware NSX, Midokura, PLUMgrid, Contrail, Nuage, … (a minimal OVS/VXLAN config sketch follows below)
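For the plain OVS case, a minimal sketch of the ML2 and OVS agent settings behind such a VXLAN overlay (VNI range and IP address are example values):

    [ml2]
    type_drivers = vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch

    [ml2_type_vxlan]
    # each tenant network gets a VXLAN Network Identifier (VNI) from this range
    vni_ranges = 1000:1999

    [ovs]
    # this node's address on the transport network (example address)
    local_ip = 192.0.2.10

    [agent]
    tunnel_types = vxlan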
[Diagram: an SDN controller cluster controls the vSwitches in the hypervisors; MAC-in-IP ‘tunnels’ (e.g. using VXLAN) address and isolate the tenants; the physical network fabric uses L3 routing protocols (e.g. OSPF or BGP) to build a stable Layer 3 fabric; L3 gateways bridge to the physical network; the Neutron plugin talks to the controller through the vendor API]
8. Why I think the ‘Network virtualization’
(aka overlay) approach is the best model
• It achieves multi-tenancy, L2 isolation and overlapping IP address support without the need to reconfigure physical network devices
• The logical network for instances (VMs) is location independent – it spans L2/L3 boundaries, and therefore doesn’t force bad (flat) network design
• Very big ID space for tenant addressing compared to the usual VLAN ID space (max. 4094; see the comparison below)
• Network virtualization runs as a software construct on top of any physical network topology,
vendor, etc.
• Physical network and logical network can evolve independently from each other, each one can
be procured, exchanged, upgraded and serviced independently
• A large number of commercial and open source implementations are available today
• Proven in production in some of the largest OpenStack deployments out there
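To put the ID-space argument in numbers (simple arithmetic, not on the original slide): the 802.1Q VLAN ID field is 12 bits, so 2^12 − 2 = 4,094 usable VLANs, while e.g. the VXLAN VNI field is 24 bits, so 2^24 = 16,777,216 possible segment IDs – roughly 4,000 times more tenant networks.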
9. Nova-Networking quick Overview
[Diagram: Nova architecture – nova-api (OS, EC2, Admin), nova-scheduler, nova-cert, nova-console (vnc/vmrc), nova-consoleauth, nova-metadata, nova-compute (libvirt, XenAPI, etc. driving KVM, Xen, etc.), nova-network and nova-volume around the queue and the Nova DB; nova-network drives the network providers (Linux Bridge or OVS with brcompat, dnsmasq, iptables), nova-volume the volume providers (iSCSI, LVM, etc.). Inspired by Ken Pepple]
• Nova-Networking was OpenStack’s first network implementation
• Nova-network is still present today, and can be used instead of Neutron
• No new features have been added since Folsom, but bug fixing is done frequently
• Nova-network only knows three basic network models (selected via nova.conf, see the sketch below):
– Flat & FlatDHCP: direct bridging of instances to an external Ethernet interface, without or with DHCP
– VLAN based: every tenant gets a VLAN, DHCP enabled
• Watch this online meetup session for more details: https://www.youtube.com/watch?v=ascEICz_WUY
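As an illustration, the model is selected through the network manager class in nova.conf (a sketch with example interface/bridge names):

    # nova.conf – pick one network manager
    network_manager = nova.network.manager.FlatDHCPManager
    # alternatives: nova.network.manager.FlatManager,
    #               nova.network.manager.VlanManager
    # bridge the instances are attached to
    flat_network_bridge = br100
    # interface the bridge is built on
    flat_interface = eth1
    # interface used for SNAT / floating IPs
    public_interface = eth0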
10. Nova-Networking Multi-Host mode 1/2
[Diagram: one node runs nova-compute plus the networking role (nova-network, dnsmasq per VLAN, iptables/routing, NAT and floating IPs) and connects the internal VLANs (bridges for VLAN 30/40) to the external network/WAN; the plain compute nodes only bridge their VMs onto the internal VLAN trunk]
• In Nova-Networking the node that holds the nova-networking role is:
– A single point of failure
– A choke point for both east-west and north-south traffic
(traffic staying in the DC between nodes, and traffic leaving/entering the DC at the perimeter)
• Nova-Networking has a “multi-host mode” to address this
11. Nova-Networking Multi-Host mode 2/2
[Diagram: every compute node now also runs the networking role (nova-network, dnsmasq, iptables/routing, NAT and floating IPs) and connects its local VLAN bridges to the external network/WAN directly]
• With nova-networking “multi-host”, each compute node runs nova-networking and provides routing, SNAT and floating IPs (DNAT) for its local instances (enabled via the multi_host flag, see the sketch below)
– Pros: inherently highly available; scales out routing and NAT to all compute nodes
– Cons: IP address sprawl – each compute node needs one external IP for SNAT, and one internal IP in each project network
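A rough sketch of how multi-host mode is enabled (flag names as documented for nova-network; network values are examples):

    # nova.conf on every compute node (each of them also runs nova-network)
    multi_host = True

    # networks can also be created as multi-host explicitly:
    #   nova-manage network create --label=demo \
    #       --fixed_range_v4=10.0.0.0/24 --multi_host=T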
12. Nova-Networking vs. Neutron at a glance
• Watch this online meetup session for more details: https://www.youtube.com/watch?v=ascEICz_WUY
• Neutron pros
– More network implementation options
– Dynamic network, virtual router, load balancer and VPN creation under the tenant’s control, instead of a fixed per-project allocation
– Pluggable architecture allows vendors to integrate their network solutions into OpenStack and innovate independently (e.g. using network virtualization, SDN concepts, etc.)
– Well-defined tenant-accessible API for consuming network services
• Nova-Networking pros
– Simple models with fewer moving parts
– “Compute centric” networking model; easier to understand than the complex options and “networking speak” in Neutron
– Code base has been in “bug-fixing” mode for a long time now; less friction
– HA and scale-out through the “multi-host” option (starting to be addressed in Neutron by DVR and L3 HA)
13. OpenStack Neutron – Plugin Concept refresher
Neutron Core API
Neutron Service (Server)
• L2 network abstraction definition and management, IP address management
• Device and service attachment framework
• Does NOT do any actual implementation of the abstraction
Plugin API
Vendor/User Plugin
• Maps the abstraction to an implementation on the network (overlay, e.g. NSX, or the physical network)
• Makes all decisions about *how* a network is to be implemented
• Can provide additional features through API extensions
• Extensions can either be generic (e.g. L3 router / NAT) or vendor specific
Neutron API Extension (implementing the extension API is optional)
16. OpenStack Neutron – Modular Plugin
• Before the modular plugin (ML2), every team or vendor had to implement a complete plugin, including IPAM, DB access, etc.
• The ML2 plugin separates core functions like IPAM, virtual network ID management, etc. from vendor/implementation-specific functions, and therefore makes it easier for vendors not to reinvent the wheel with regard to ID management, DB access, …
• Existing and future non-modular plugins are called “monolithic” plugins
• ML2 calls the management of network types “type drivers”, and the implementation-specific part “mechanism drivers” (see the config sketch below)
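The type/mechanism driver split is directly visible in the ML2 configuration; a hedged example (the driver lists are illustrative, any installed drivers can be combined):

    [ml2]
    # type drivers manage the network types and their segmentation IDs
    type_drivers = flat,vlan,gre,vxlan
    tenant_network_types = vxlan
    # mechanism drivers implement the networks on a concrete technology
    mechanism_drivers = openvswitch,linuxbridge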
[Diagram: the ML2 plugin & API extensions sit on top of a Type Manager (type drivers: GRE, VLAN, VXLAN, etc.) and a Mechanism Manager (mechanism drivers: Arista, Cisco, Linux Bridge, OVS, etc.)]
18. OpenStack Neutron – Modular Plugin vs. Monolithic Plugins
• A vendor is free to choose between the development of a monolithic plugin or an ML2 mechanism driver
– A vendor might want to use its own integrated IPAM / DB access, or already has a stable and proven code base for it
– Timing: development of a monolithic plugin might have started long before ML2 emerged
• Contrary to a common misunderstanding, monolithic plugins are not deprecated; only the existing OVS and Linux Bridge plugins were deprecated in Icehouse in favor of the OVS / Linux Bridge mechanism drivers
• ML2 re-uses the monolithic OVS and Linux Bridge code for its mechanism drivers and agents (e.g. L3 agent, DHCP agent, OVS agent, etc.)
19. Juno – Distributed Virtual Router for OVS – 1/5
• There was no equivalent of nova-network’s “multi-host” mode in Neutron before Juno
• In the OVS and Linux Bridge implementations, the L3 agent node is a single point of failure
• Scaling out is done by deploying multiple network nodes, but even then east-west traffic needs to go through an L3 agent node, which can potentially become a choke point
• Some vendor implementations already have distributed routing and HA in their solutions today
[Diagram: a single Neutron network node (N.-L3-Agent, N.-DHCP-Agent, N.-OVS-Agent, dnsmasq, iptables/routing, NAT and floating IPs, br-ex/br-int/br-tun, ovsdb-server/ovs-vswitchd) connects all compute nodes (N.-OVS-Agent, br-int/br-tun, L2-in-L3 tunnels over the Layer 3 transport network) to the external network/WAN; the Neutron server runs the OVS plugin]
20. Juno – Distributed Virtual Router for OVS – 2/5
• Similar to “multi-host” mode in nova-network, each compute node now has its own routing and NAT service (internal router namespaces – ‘IR’); see the config sketch below
• In contrast to nova-network “multi-host” mode:
– SNAT is done on a centralized network node to avoid IP address sprawl on the external network (introducing a single point of failure that needs to be addressed through virtual router HA later)
– All IRs use a single logical internal IP in the tenant networks, but have separate MAC addresses
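A rough sketch of the Juno knobs involved (option names as documented for Juno; treat the combination as illustrative):

    # neutron.conf on the server – new routers become distributed by default
    router_distributed = True

    # l3_agent.ini on the compute nodes
    agent_mode = dvr
    # l3_agent.ini on the central SNAT network node
    # agent_mode = dvr_snat

    # OVS agent configuration
    enable_distributed_routing = True
    l2_population = True

    # a router can also be created distributed explicitly (admin only):
    #   neutron router-create --distributed True router1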
[Diagram: the network node keeps the central N.-L3-Agent for SNAT; every compute node additionally runs an N.-L3-(DVR)-Agent with its own iptables/routing and NAT for floating IPs, plus a local br-ex to the external network, next to the usual N.-OVS-Agent with br-int/br-tun and the L2-in-L3 tunnels over the Layer 3 transport network]
21. Juno – Distributed Virtual Router for OVS – 3/5
• East-west traffic that is routed within a tenant’s distributed virtual router is sent directly between the compute nodes on the transport network (using the overlay technology)
• Traffic can also stay within a compute node, if the source and destination are on the same compute node
• For more details see the DVR blueprint: https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
[Diagram: east-west traffic between VMs behind the internal routers IR1/IR2 flows directly from compute node to compute node over the transport network (e.g. used for tunnels); only the centralized R1/R2 SNAT function remains on the network node for north-south traffic]
22. Juno – Distributed Virtual Router for OVS – 4/5
• For SNAT from the tenant instances to the internet/WAN (north-south), traffic is routed through a centralized network node
• This avoids IP address sprawl on the external network
• For more details see the DVR blueprint: https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
[Diagram: north-south SNAT traffic from the VMs is tunneled from the compute nodes to the SNAT router IP (R1/R2 SNAT) on the network node and leaves through its external network connection]
23. Juno – Distributed Virtual Router for OVS – 5/5
• For floating-IP traffic between the tenant instances and the internet/WAN (north-south), traffic is routed and NAT’ed directly at the compute nodes (IR namespace)
• For more details see the DVR blueprint: https://blueprints.launchpad.net/neutron/+spec/neutron-ovs-dvr
[Diagram: north-south floating-IP traffic is NAT’ed in the IR namespaces on the compute nodes and goes straight out through each compute node’s own external network connection, bypassing the network node]
24. Juno – Current caveats for Distributed Virtual Router
• Currently there is no HA support for the centralized SNAT node (north-south). Although there is L3 agent HA in Juno, you need to choose between DVR mode and L3 HA today. The plan is to address this in Kilo, or even later, as the Neutron team has other technical debt to work on
• No IPv6 support
• DVR is only supported with the OVS plugin and VXLAN-based overlays; no support for VLAN modes and/or the Linux Bridge plugin
• No support for VPNaaS
• Longer-term plans:
– Distributed SNAT
– Distributed DHCP (nova-network has this today)
– Full migration support from virtual routers to DVR
25. Juno – HA for Virtual Routers
• Juno added native HA support using ‘keepalived’ for the centralized L3 agent nodes
• If configured for HA, one active and one standby router are deployed on two different Neutron L3 gateway network nodes; both share virtual IPs internally (see the config sketch after the diagram)
• For more details see the HA for virtual routers blueprint: https://github.com/openstack/neutron-specs/blob/master/specs/juno/l3-high-availability.rst
[ASCII diagram from the blueprint: two HA router namespaces, each with a QG (external gateway) port and a QR (tenant-side) port carrying the VIPs; a keepalived instance runs in each namespace, and the two communicate over dedicated HA ports to elect which router holds the VIPs]
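A minimal sketch of the knobs involved (values are illustrative):

    # neutron.conf
    l3_ha = True
    # how many L3 agents schedule each HA router
    max_l3_agents_per_router = 3
    min_l3_agents_per_router = 2

    # or per router at creation time (admin only):
    #   neutron router-create --ha True router1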
26. Juno – Current caveats for L3 Agent HA
• Currently there is no state sync for NAT tables and FWaaS state; planned to be addressed in Kilo or later using conntrackd
• No support for HA when using the DVR functionality (see also the first bullet)
• No logging of state transitions, no CLI to see where the active router is, and no CLI to move it between nodes
• Currently no automatic migration of existing routers to HA routers
• Max. 255 router pairs per HA network, and therefore per tenant
27. Juno – IPv6 support
• IPv6 was dysfunctional at multiple implementation points in Neutron before Juno
– No support for Stateless Address Autoconfiguration (SLAAC) in the OpenStack security model / IPAM, so even when using an external IPv6 router, security groups and port security would prevent the instance from working correctly
– Dnsmasq support for DHCPv6 was problematic and “broken”
– No IPv6 routing support in the L3 agent, metadata, etc.
• A new IPv6 Neutron subteam was founded to address the multiple IPv6 requirements
• Expected critical IPv6 features in the Juno timeframe (see the CLI sketch below):
– Provider networking – upstream SLAAC support
– Support for DHCPv6 stateless and stateful mode in Dnsmasq
– Support for the Router Advertisement Daemon (radvd) for IPv6
• See more details here: https://wiki.openstack.org/wiki/Neutron/IPv6
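The new subnet attributes surface in the CLI roughly like this (a sketch; network name and prefix are example values):

    # SLAAC: address autoconfiguration via router advertisements (radvd)
    neutron subnet-create --ip-version 6 \
        --ipv6-ra-mode slaac --ipv6-address-mode slaac \
        demo-net 2001:db8::/64

    # DHCPv6 stateful via dnsmasq:
    #   --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful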
28. Juno – More Information
• A large number of new vendor plugins, enhancements to existing plugins and mechanism drivers, service plugins, etc. are being developed for the Juno timeframe right now
• See here for a list of Juno Specs (linking to the Blueprints):
https://github.com/openstack/neutron-specs/tree/master/specs/juno
• See here for a list of Blueprints: https://blueprints.launchpad.net/neutron/juno