1. Modular Layer 2 In
OpenStack Neutron
Robert Kukura, Red Hat
Kyle Mestery, Cisco
2. 1. I’ve heard the Open vSwitch and Linuxbridge Neutron plugins are being deprecated.
   2. I’ve heard ML2 does some cool stuff!
   3. I don’t know what ML2 is but want to learn about it and what it provides.
3. What is Modular Layer 2?
A new Neutron core plugin in Havana
• Modular
   o Drivers for layer 2 network types and mechanisms interface with agents, hardware, controllers, ...
   o Service plugins and their drivers for layer 3+
• Works with existing L2 agents
   o openvswitch
   o linuxbridge
   o hyperv
• Deprecates existing monolithic plugins
   o openvswitch
   o linuxbridge
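As a concrete sketch, type and mechanism drivers are selected through the ML2 configuration file (typically ml2_conf.ini). The option names below are the Havana-era ML2 options; the physnet name and VLAN range are illustrative values, not part of the original deck:

```ini
[ml2]
# TypeDrivers to load, and the segmentation type tenant networks get
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vlan

# MechanismDrivers tried, in this order, when binding ports
mechanism_drivers = openvswitch,linuxbridge

[ml2_type_vlan]
# physical_network name and VLAN range available for tenant allocation
network_vlan_ranges = physnet1:1000:2999
```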
5. Before Modular Layer 2 ...
[Diagram: a Neutron Server runs exactly one monolithic core plugin: the Open vSwitch Plugin, OR the Linuxbridge Plugin, OR ...]
6. Before Modular Layer 2 ...
[Diagram: a vendor writing a "Vendor X Plugin" for the Neutron Server thinks: "I want to write a Neutron Plugin. But I have to duplicate a lot of DB, segmentation, etc. work. What a pain. :("]
7. ML2 Use Cases
• Replace existing monolithic plugins
   o Eliminate redundant code
   o Reduce development & maintenance effort
• New features
   o Top-of-Rack switch control
   o Avoid tunnel flooding via L2 population
   o Many more to come...
• Heterogeneous deployments
   o Specialized hypervisor nodes with distinct network mechanisms
   o Integrate *aaS appliances
   o Roll new technologies into existing deployments
9. The Modular Layer 2 (ML2) Plugin is a
framework allowing OpenStack Neutron to
simultaneously utilize the variety of layer 2
networking technologies found in complex
real-world data centers.
10. What’s Similar?
ML2 is functionally a superset of the monolithic openvswitch, linuxbridge, and hyperv plugins:
• Based on NeutronDBPluginV2
• Models networks in terms of provider attributes
• RPC interface to L2 agents
• Extension APIs
11. What’s Different?
ML2 introduces several innovations to achieve its goals:
• Cleanly separates management of network types from the mechanisms for accessing those networks
   o Makes types and mechanisms pluggable via drivers
   o Allows multiple mechanism drivers to access the same network simultaneously
   o Optional features packaged as mechanism drivers
• Supports multi-segment networks
• Flexible port binding
• L3 router extension integrated as a service plugin
12. ML2 Architecture Diagram
[Diagram: the Neutron Server hosts the API Extensions and the ML2 Plugin. The ML2 Plugin contains a Type Manager, with the VLAN TypeDriver, GRE TypeDriver, and VXLAN TypeDriver, and a Mechanism Manager, with the Open vSwitch, Linuxbridge, Hyper-V, L2 Population, Cisco Nexus, Arista, and Tail-F NCS mechanism drivers.]
13. Multi-Segment Networks
[Diagram: one network made of segments VXLAN 123567, physnet1 VLAN 37, and physnet2 VLAN 413; VM 1, VM 2, and VM 3 attach to different segments.]
● Created via multi-provider API extension
● Segments bridged administratively (for now)
● Ports associated with network, not specific segment
● Ports bound automatically to segment with connectivity
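A multi-segment network like the one in the diagram is created by passing a segments list to the network-create call of the multi-provider extension. The sketch below only builds the request body; the network name is made up, and the segment values mirror the diagram:

```python
import json

# Sketch of a request body for POST /v2.0/networks with the
# multi-provider extension. "multi-net" is an illustrative name.
network_body = {
    "network": {
        "name": "multi-net",
        "segments": [
            # Tunnel segment: no physical_network for VXLAN
            {"provider:network_type": "vxlan",
             "provider:segmentation_id": 123567},
            # VLAN segment on physnet1
            {"provider:network_type": "vlan",
             "provider:physical_network": "physnet1",
             "provider:segmentation_id": 37},
            # VLAN segment on physnet2
            {"provider:network_type": "vlan",
             "provider:physical_network": "physnet2",
             "provider:segmentation_id": 413},
        ],
    }
}

print(json.dumps(network_body, indent=2))
```

A port created on this network references the network, not a segment; ML2 later binds it to whichever segment the port's host can reach.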
16. Port Binding
• Determines values for the port’s binding:vif_type and binding:capabilities attributes and selects a segment
• Occurs when binding:host_id is set on a port without an existing valid binding
• ML2 plugin calls bind_port() on registered MechanismDrivers, in the order listed in the config, until one succeeds or all have been tried
• Driver determines if it can bind based on:
   o context.network.network_segments
   o context.current['binding:host_id']
   o context.host_agents()
• For L2 agent drivers, binding requires a live L2 agent on the port’s host that:
   o Supports the network_type of a segment of the port’s network
   o Has a mapping for that segment’s physical_network, if applicable
• If it can bind the port, the driver calls context.set_binding() with the binding details
• If no driver succeeds, the port’s binding:vif_type is set to BINDING_FAILED
class PortContext(object):

    @abstractproperty
    def current(self):
        pass

    @abstractproperty
    def original(self):
        pass

    @abstractproperty
    def network(self):
        pass

    @abstractproperty
    def bound_segment(self):
        pass

    @abstractmethod
    def host_agents(self, agent_type):
        pass

    @abstractmethod
    def set_binding(self, segment_id,
                    vif_type,
                    cap_port_filter):
        pass
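The binding flow can be sketched end to end against this interface. Everything below is a simplified illustration: FakePortContext, the dict-based segments and agents, and SketchAgentMechanismDriver are stand-ins for the real Neutron objects, not actual ML2 classes.

```python
BINDING_FAILED = "binding_failed"


class FakePortContext(object):
    """Illustrative stand-in for ML2's PortContext."""

    def __init__(self, host, segments, agents_by_host):
        self.current = {"binding:host_id": host,
                        "binding:vif_type": None}
        self.network_segments = segments      # list of segment dicts
        self._agents = agents_by_host         # agents keyed by host
        self.bound_segment = None

    def host_agents(self, agent_type):
        host = self.current["binding:host_id"]
        return [a for a in self._agents.get(host, [])
                if a["agent_type"] == agent_type and a["alive"]]

    def set_binding(self, segment_id, vif_type, cap_port_filter):
        self.current["binding:vif_type"] = vif_type
        self.bound_segment = segment_id


class SketchAgentMechanismDriver(object):
    """Binds a port if a live agent on its host can reach a segment."""

    agent_type = "Open vSwitch agent"
    vif_type = "ovs"

    def bind_port(self, context):
        for agent in context.host_agents(self.agent_type):
            for segment in context.network_segments:
                if self._check_segment(segment, agent):
                    context.set_binding(segment["id"], self.vif_type,
                                        cap_port_filter=True)
                    return True
        return False

    def _check_segment(self, segment, agent):
        # Tunnel and local types need no physical_network mapping.
        if segment["network_type"] in ("local", "gre", "vxlan"):
            return True
        # flat/vlan require a mapping for the segment's physical_network.
        if segment["network_type"] in ("flat", "vlan"):
            mappings = agent["configurations"].get("bridge_mappings", {})
            return segment["physical_network"] in mappings
        return False


def bind(context, drivers):
    """Plugin-side loop: try drivers in configured order until one binds."""
    for driver in drivers:
        if driver.bind_port(context):
            return
    context.current["binding:vif_type"] = BINDING_FAILED
```

With a live agent whose bridge_mappings cover the segment's physical_network, bind() ends with vif_type set by the driver; with no matching agent, it falls through to BINDING_FAILED.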
18. Type Drivers in Havana
The following are supported segmentation
types in ML2 for the Havana release:
● local
● flat
● VLAN
● GRE
● VXLAN
19. Mechanism Drivers in Havana
The following ML2 MechanismDrivers exist in Havana:
● Arista
● Cisco Nexus
● Hyper-V Agent
● L2 Population
● Linuxbridge Agent
● Open vSwitch Agent
● Tail-f NCS
20. Before ML2 L2 Population MechanismDriver
“VM A” wants to talk to “VM G.” “VM A” sends a broadcast packet, which is replicated to the entire tunnel mesh.
[Diagram: VMs A through I spread across Hosts 1 through 4 in a full tunnel mesh; the broadcast from VM A reaches every host.]
21. With ML2 L2 Population MechanismDriver
The ARP request from “VM A” for “VM G” is intercepted and answered using a pre-populated neighbor entry. Traffic from “VM A” to “VM G” is then encapsulated and sent to “Host 4” according to the bridge forwarding table entry.
[Diagram: proxy ARP answers VM A on Host 1; a single unicast tunnel carries the traffic to Host 4, instead of flooding Hosts 2 through 4.]
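The effect of L2 population can be sketched as a lookup in a pre-populated forwarding table that replaces flooding. The host names and the MAC address below are made up for the example; the table stands in for the bridge FDB entries the driver would pre-install from Neutron's port database:

```python
# Without L2 population, a frame to an unknown MAC floods every tunnel
# peer; with a pre-populated entry, it goes only to the VM's real host.

TUNNEL_MESH = ["host1", "host2", "host3", "host4"]

# Entry the L2 Population driver would pre-install on host1, derived
# from Neutron's port database rather than learned by flooding.
FDB = {"fa:16:3e:aa:bb:07": "host4"}  # "VM G" lives on Host 4


def deliver(src_host, dst_mac, fdb):
    """Return the list of hosts a frame from src_host is sent to."""
    if dst_mac in fdb:
        return [fdb[dst_mac]]                         # unicast, one tunnel
    return [h for h in TUNNEL_MESH if h != src_host]  # flood the mesh


print(deliver("host1", "fa:16:3e:aa:bb:07", FDB))  # -> ['host4']
print(deliver("host1", "fa:16:3e:aa:bb:07", {}))   # -> ['host2', 'host3', 'host4']
```

The same idea applies to ARP: a pre-populated neighbor entry lets the local bridge answer directly, so the broadcast never enters the mesh.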
23. ML2 Futures: Deprecation Items
• The future of the Open vSwitch and Linuxbridge plugins
   o These are planned for deprecation in Icehouse
   o ML2 supports all their functionality
   o ML2 works with the existing OVS and Linuxbridge agents
   o No new features being added in Icehouse to the OVS and Linuxbridge plugins
• Migration tool being developed
24. Plugin vs. ML2 MechanismDriver?
• Advantages of writing an ML2 driver instead of a new monolithic plugin
   o Much less code to write (or clone) and maintain
   o New Neutron features supported as they are added
   o Support for heterogeneous deployments
• Vendors integrating new plugins should consider an ML2 driver instead
   o Existing plugins may want to migrate to ML2 as well
25. ML2 With Current Agents
● Existing ML2 Plugin works with existing agents
● Separate agents for Linuxbridge, Open vSwitch, and Hyper-V
[Diagram: the Neutron Server with the ML2 Plugin on the API network; Host A runs the Linuxbridge Agent, Host B the Hyper-V Agent, and Hosts C and D the Open vSwitch Agent.]
26. ML2 With Modular L2 Agent
● Future direction is to combine open source agents
● Have a single agent which can support Linuxbridge and Open vSwitch
● Pluggable drivers for additional vSwitches, Infiniband, SR-IOV, ...
[Diagram: the Neutron Server with the ML2 Plugin on the API network; Hosts A through D each run the Modular Agent.]
28. What the Demo Will Show
● ML2 running with multiple MechanismDrivers
   ○ openvswitch
   ○ cisco_nexus
● Booting multiple VMs on multiple compute hosts
● Hosts are running Fedora
● Configuration of VLANs across both virtual and physical infrastructure
29. ML2 Demo Setup
[Diagram: Host 1 runs nova api, neutron server, neutron dhcp, neutron l3 agent, nova compute, and the neutron ovs agent; VM1 connects through br-int and br-eth2 to eth2. Host 2 runs nova compute and the neutron ovs agent; VM2 connects through br-int and br-eth2 to eth2. The hosts attach to ports eth2/1 and eth2/2 of a Cisco Nexus switch.]
The VLAN is added on the VIF for VM1, and also on the br-eth2 ports, by the ML2 OVS MechanismDriver; the same happens for VM2 on Host 2. The ML2 Cisco Nexus MechanismDriver trunks the VLAN on eth2/1 and on eth2/2. VM1 can ping VM2 … we’ve successfully completed the standard network test.
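A minimal ml2_conf.ini for a setup like this demo might look as follows. The physnet name and VLAN range are illustrative, and the cisco_nexus driver additionally needs switch address and credential options configured, which are omitted here:

```ini
[ml2]
type_drivers = vlan
tenant_network_types = vlan
# Both drivers handle each port: OVS binds it, Nexus trunks the VLAN
mechanism_drivers = openvswitch,cisco_nexus

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:199
```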