Quantum Modular L2 Plugin and Agent

Bob Kukura
Red Hat
Grizzly Design Summit
10/15/2012
History
●   Prior to Folsom, Quantum assumed “green-field” cloud data center
    –   Single networking technology
    –   Uniform connectivity
    –   No access to existing networks
    –   Exception: Cisco plugin's sub-plugins for UCS, Nexus
●   Red Hat Quantum/oVirt meeting March 2012
    –   Discussed Quantum “gaps” preventing use by oVirt for enterprise virtualization
    –   Many apply to private OpenStack clouds
●   Presented “Quantum in the Data Center” at Folsom design summit
    –   Existing networks
    –   Heterogeneous technology
    –   Non-uniform connectivity
    –   Deployability issues
●   Folsom features
    –   Provider networks
    –   Metaplugin
    –   RPC support
Terminology
●   Network – An abstract Quantum isolated L2 network whose ports can be attached to
    VMs, agents, etc.
●   Tenant Network – A “normal” Quantum network created by a tenant.
●   Provider Network – A Quantum network administratively created to map to a specific
    existing network in the data center.
●   Network Type – The specification by which a Quantum network segment is realized “on
    the wire” (e.g. VLAN, GRE tunnel, flat).
●   Physical Network – A specific network “wire” supporting a set of Quantum networks.
●   Segmentation ID – An identifier distinguishing Quantum networks of the same network
    type from each other on the same physical network (e.g. VLAN tag or tunnel ID).
●   Network Mechanism – A host networking facility that can provide access to networks of
    one or more network types (e.g. OVS or Linux bridging).
●   Network Segment – A portion of a network implemented with a particular network type
    and associated details.
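
To make these terms concrete, here is a minimal sketch of creating a provider network through the Folsom provider extension, which exposes the network type, physical network, and segmentation ID as API attributes (credentials, endpoint, and names are placeholders):

    # Sketch: map a Quantum network onto VLAN 100 of physical network
    # "physnet1" (requires admin; credentials/URL are placeholders).
    from quantumclient.v2_0 import client

    quantum = client.Client(username='admin',
                            password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')

    quantum.create_network({'network': {
        'name': 'datacenter-vlan-100',
        'provider:network_type': 'vlan',           # network type
        'provider:physical_network': 'physnet1',   # physical network "wire"
        'provider:segmentation_id': 100,           # segmentation ID (VLAN tag)
    }})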
Current Plugin Capabilities
Plugin        Tenant Network Types   Provider Network Types   Network Mechanisms
openvswitch   VLAN, GRE, local       VLAN, GRE, flat, local   Open vSwitch, controlled locally via agent
linuxbridge   VLAN, local            VLAN, flat, local        Linux bridging, controlled via agent
nec           Trema, PFC             Not implemented          Trema, PFC controllers, with agent to discover ports
ryu           Ryu                    Not implemented          Ryu controller
nvp           NVP                    Not implemented          Nicira NVP controller
cisco         VLAN                   Not implemented          UCS, Nexus switches, sub-plugins
metaplugin    sub-plugins            sub-plugins              sub-plugins
Problem Statement
●   Current Quantum plugins each support a single L2 networking technology
    –   Typically a specific OpenFlow controller
    –   Focus is on isolating L2 tenant networks
●   Also need to access provider networks
    –   External networks
    –   Existing data center networks
●   May have a mixture of networking technologies and mechanisms supporting them
    –   Different systems accessing the same VLAN trunks via Linux bridging, OVS, and Cisco UCS
    –   Combination of legacy networking and SDN in the data center
    –   Physical appliances (LB, firewall, routing, VPN, etc.)
●   Quantum needs to support multiple L2 networking technologies simultaneously
    –   Multiple types of networks
    –   Multiple mechanisms to access a network type
    –   Networks made up of multiple segments, possibly of different types
Options
●   Monolithic plugin
    –   Pick a plugin that supports everything you need
    –   Add provider network capabilities to controller-based plugins
        ●   Via controller?
        ●   Via parallel mechanism (L2 agent, bridging, OVS)?
●   Meta-plugin
    –   Support multiple Quantum plugins simultaneously
    –   Several possible semantics:
        ●   Each network belongs to exactly one sub-plugin
        ●   Each network created in all sub-plugins
●   Modular Plugin
    –   Separate the network type from the mechanism a system uses to access that network type
    –   Drivers for network types
    –   Drivers for mechanisms
Meta-plugin Overview
●   Wraps multiple real plugins
    –   Gary coined the name “rosetta-plugin”
●   Flavor extension
    –   “flavor:network” attribute on network identifies implementing plugin
    –   Similar “flavor:router” attribute on router
●   DB table
    –   Maps network and router IDs to implementing plugin
●   Plugin operation
    –   Network/router create – dispatch to plugin named by flavor or use default,
        record flavor mapping
    –   Other network/router operations – read flavor mapping, dispatch to
        implementing plugin
●   Agent wrappers?
●   MetaInterfaceDriver for VIFs
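
The flavor-based dispatch under “Plugin operation” might look roughly like the sketch below; the class and attribute names are illustrative, not the actual metaplugin code:

    # Illustrative sketch of meta-plugin dispatch (not the real code).
    class MetaPluginSketch(object):
        def __init__(self, plugins, default_flavor):
            self.plugins = plugins            # e.g. {'openvswitch': <plugin>}
            self.default_flavor = default_flavor
            self.flavor_map = {}              # stands in for the DB table

        def create_network(self, context, network):
            flavor = (network['network'].get('flavor:network')
                      or self.default_flavor)
            net = self.plugins[flavor].create_network(context, network)
            self.flavor_map[net['id']] = flavor   # record flavor mapping
            return net

        def delete_network(self, context, net_id):
            flavor = self.flavor_map[net_id]      # read flavor mapping
            return self.plugins[flavor].delete_network(context, net_id)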
Meta-Plugin Limitations
●   Considered “experimental” in Folsom
●   Tightly coupled to plugin implementations
    –   Requires that they all inherit QuantumDBPluginV2, use same inherited
        DB tables
    –   Can't have conflicting DB tables
    –   L3 must be compatible
●   Cannot make same network accessible via multiple plugins
    –   Common with data center provider networks
    –   Could be required for VLAN tenant networks
●   flavor:router extended attribute not visible because router is an
    extension
Cisco Plugin(s)
●   Readme at http://wiki.openstack.org/cisco-quantum:
    –   “A reference implementation for a Quantum Plugin Framework”
    –   “Supports use of multiple L2 technologies”
●   Main network_plugin.PluginV2 + sub-plugins
    –   Supports Cisco UCS, Nexus switch, openvswitch
    –   Seems to delegate most calls to all sub-plugins, not just to the one
        “owning” the network as in the meta-plugin
●   Several Quantum API extensions
●   Tightly coupled with openvswitch plugin
Is there a better approach?
●   Existing linuxbridge and openvswitch plugins are almost clones
    –   DB schemata for VLAN, flat, and local networks could be identical
    –   Support for GRE networks in openvswitch is the only meaningful difference between the plugins
    –   Very little work to make linuxbridge agent work with openvswitch plugin
    –   The agents also share much similar code
●   Most plugin functionality is specific to the supported network types, not to the
    mechanism used to access the network type
    –   DB schema
    –   Tenant network pooling and allocation
    –   Provider attribute validation
●   So far, all provider network types can be represented by a network_type,
    physical_network, segmentation_id tuple
●   Should be able to add support for new technologies/mechanisms without having
    to write a whole new plugin
Modular L2 Plugin & Agent
●   Make multiple networking technologies work together
    –   Single plugin and L2 agent for all network types and network mechanisms where an
        external controller doesn't “own the world”
●   Pluggable network type drivers in server
    –   Each supports a single network type (e.g. VLAN, GRE, flat, local, VXLAN, ...)
    –   Provides any needed DB schema
    –   Validates and manages provider attributes
    –   Optionally pools/allocates tenant networks
●   Pluggable mechanism drivers in L2 agent
    –   Execute network administration commands to realize supported network types using a
        specific networking mechanism (OVS, Linux bridging, ...)
●   Pluggable mechanism drivers in server
    –   Interface with external network controllers
    –   Remotely manage OVS on nodes
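
A minimal sketch of the proposed driver interfaces, assuming abstract base classes; the method names are guesses at what the eventual code might define:

    # Sketch of driver APIs (method names are assumptions).
    from abc import ABCMeta, abstractmethod

    class NetworkTypeDriver(object):
        """Server-side driver for one network type (vlan, gre, flat, ...)."""
        __metaclass__ = ABCMeta

        @abstractmethod
        def validate_provider_segment(self, segment):
            """Validate provider attrs (physical_network, segmentation_id)."""

        @abstractmethod
        def allocate_tenant_segment(self, session):
            """Optionally allocate a tenant segment from a configured pool."""

    class MechanismDriver(object):
        """Realizes segments via one mechanism (OVS, Linux bridging, ...)."""
        __metaclass__ = ABCMeta

        @abstractmethod
        def plug_port(self, port, segment):
            """Connect the port to the given segment on this host."""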
Modular L2 Proposed Plan
●   Start simple
    –   Merge openvswitch and linuxbridge functionality
    –   Refactor into modular drivers for network types and agent-based mechanisms
●   Avoid short-term risk
    –   Maintain existing plugins throughout Grizzly cycle
    –   Possibly deprecate non-modular openvswitch and linuxbridge plugins at Grizzly release
●   Evolve
    –   Add support for multiple-segment networks
    –   Add server-based drivers
        ●   OpenFlow controller drivers
        ●   UCS, Nexus drivers
        ●   Remote OVS driver?
    –   Improve Nova integration
    –   Add provisioning API extension
    –   Add scheduler support for non-uniform connectivity
●   Meta-plugin future?
●   Monolithic plugins' future?
Plugin/Agent/Driver Interactions
●   Define driver APIs via abstract classes implemented by drivers
●   Provider network creation
    –   Dispatch to network type driver based on specified network type
    –   Driver validates parameters, manages DB
●   Tenant network creation
    –   Network type drivers optionally support allocation
    –   Can use pools if needed
    –   Possibly support allocation of specific network types
●   Port plugging
    –   Current VIF driver approach would support single mechanism per node
        ●   Triggered by tap device discovery, etc.
    –   Proposed Nova->Quantum call to select/configure VIF driver could support modular plugin
        picking “best” of multiple mechanisms for node to connect to segment of network
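
Putting the pieces together, network creation dispatch might look like the following sketch (all names illustrative):

    # Sketch: provider requests go to the named type driver; tenant requests
    # try drivers that support allocation, in a configured order.
    class ModularPluginSketch(object):
        def __init__(self, type_drivers, tenant_type_order):
            self.type_drivers = type_drivers            # {'vlan': <driver>, ...}
            self.tenant_type_order = tenant_type_order  # e.g. ['vlan', 'gre']

        def create_network_segment(self, session, attrs):
            net_type = attrs.get('provider:network_type')
            if net_type:                               # provider network
                driver = self.type_drivers[net_type]
                return driver.validate_provider_segment(attrs)
            for net_type in self.tenant_type_order:    # tenant network
                driver = self.type_drivers[net_type]
                segment = driver.allocate_tenant_segment(session)
                if segment:
                    return segment
            raise RuntimeError("no tenant network segments available")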
Network Segments
●   Allow single L2 network to span multiple technologies
    –   e.g. Switch connects VLAN 20 on one physical network to VLAN 30 on a
        different physical network
●   Introduce segment abstraction underneath network
    –   Network has one or more segments
    –   network_type/physical_network/segmentation_id tuple belongs to segment
        rather than network
●   Not visible in core API
    –   Usually single segment per network
    –   Create 1st segment automatically when creating network
    –   Ability to add additional segments to network
●   No immediate plan to manage bridging between segments, but could
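
One possible shape for the segment table, sketched as a SQLAlchemy model (column names are assumptions):

    # Sketch: the type/physical_network/segmentation_id tuple moves from the
    # network row to per-segment rows, so a network can own several segments.
    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class NetworkSegment(Base):
        __tablename__ = 'networksegments'

        id = sa.Column(sa.String(36), primary_key=True)
        network_id = sa.Column(sa.String(36), sa.ForeignKey('networks.id'),
                               nullable=False)
        network_type = sa.Column(sa.String(32), nullable=False)  # vlan, gre, flat
        physical_network = sa.Column(sa.String(64))  # null for tunnel types
        segmentation_id = sa.Column(sa.Integer)      # VLAN tag or tunnel ID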
Non-uniform Connectivity
●   Several scenarios
    –   Existing linuxbridge and openvswitch agents might not have connections to all physical
        networks
    –   Existing openvswitch agents may support GRE on some nodes but not others
    –   Modular plugin means some nodes may not have mechanisms supporting all network
        types in use
●   Possible solutions
    –   Avoid non-uniformity
    –   Align Quantum connectivity with Nova construct (cell, zone, flavor, ...?)
    –   Modular plugin answers queries from Nova Scheduler filter plug-in
●   More complex topology
    –   Segment concept so “same” L2 network can be realized in different places by different
        technologies
    –   Scheduler taking topology, latency, throughput into account
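
As a thought experiment, the scheduler filter mentioned above might look like this; no such filter exists yet, and the plugin query is a placeholder:

    # Hypothetical Nova scheduler filter backed by the modular plugin.
    def plugin_host_can_reach(host, network_id):
        """Placeholder for an RPC/API query against the modular plugin."""
        raise NotImplementedError

    class ConnectivityFilterSketch(object):
        """Would subclass nova.scheduler.filters.BaseHostFilter in practice."""

        def host_passes(self, host_state, filter_properties):
            requested = filter_properties.get('requested_networks') or []
            return all(plugin_host_can_reach(host_state.host, net_id)
                       for net_id in requested)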
Provisioning API
●   Current openvswitch and linuxbridge plugins use per-node
    config to map physical networks to bridges/interfaces (see example below)
●   Modular plugin needs to understand connectivity
    –   pick segment and mechanism to use when port plugged
    –   support Nova scheduling
●   Move mappings to server?
    –   Provisioning/management API
    –   Store in DB
    –   Use profile to avoid duplication across identical nodes
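
For reference, the per-node mappings mentioned in the first bullet look roughly like this in the Folsom agent configuration files (physical network, bridge, and interface names are examples):

    # ovs_quantum_plugin.ini on a node running the openvswitch agent
    [OVS]
    bridge_mappings = physnet1:br-eth1

    # linuxbridge_conf.ini on a node running the linuxbridge agent
    [LINUX_BRIDGE]
    physical_interface_mappings = physnet1:eth1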
Discussion
●   Does this make sense at least for openvswitch
    and linuxbridge?
●   Is there interest in drivers for OpenFlow
    controllers in this framework?
●   Can/should Cisco UCS and Nexus support be
    refactored as drivers in this framework?
●   Is metaplugin still needed for some scenarios?
