Network Virtualization and Data Center Networks
263-3825-00
SDN – Network Virtualization
Qin Yin
Fall Semester 2013
Network Virtualization History
• Dedicated overlays for incremental deployment
– Mbone (multicast) and 6bone (IPv6)
• Multi-service networks
– Tempest project for ATM networks
• Overlays for improving the network
– Resilient Overlay Networks (RON)
• Shared experimental testbeds
– PlanetLab, Emulab, Orbit, …
• Virtualizing the network infrastructure
– Overcoming Internet impasse through virtualization
– Later testbeds like GENI, VINI, …
• Virtualization in SDN
– Open vSwitch, MiniNet, FlowVisor, Nicira NVP, …
Reference: The Past, Present, and Future of Software Defined Networking. Nick Feamster, Jennifer Rexford, and Ellen Zegura.
http://gtnoise.net/papers/drafts/sdn-cacm-2013-aug22.pdf
Extending networking into the virtualization layer
Ben Pfaff, Justin Pettit, Teemu Koponen, Keith Amidon, Martin Casado, Scott Shenker
HotNets-VIII, 2009
Reference: Network Virtualization, Ben Pfaff, Nicira Networks, Inc.
http://benpfaff.org/~blp/network-virt-lecture.pdf
Data Center Network Design with VMs
[Figure: one rack of 40 machines behind a “Top of Rack” switch; ToR switches connect to aggregation switches, which connect to core switches. Each machine hosts up to 128 VMs attached to a virtual switch (= vswitch).]
Problem: Isolation
• All VMs can talk to each other by default.
• You don't want someone in engineering
screwing up the finance network. You don't
want a break-in to your production website
to allow stealing human resources data.
• Some switches have security features but:
– You bought the cheap ones instead.
– There are hundreds of switches to set up.
Problem: Connectivity
• The VMs in a data center can name each
other by their MAC addresses (L2
addresses). This only works within a data
center.
• To access machines or VMs in another
data center, IP addresses (L3 addresses)
must be used. And those IP addresses
have to be globally routable.
[Figure: the same rack topology, annotated: L2 (MAC) addressing works inside a data center; reaching machines or VMs in another data center across the Internet requires L3 (IP) addressing.]
Non-Solution: VLANs
• A VLAN partitions a physical Ethernet network into isolated virtual Ethernet networks.
• The Internet is an L3 network. When a packet crosses the
Internet, it loses all its L2 headers, including the VLAN tag.
You lose all the isolation when your traffic crosses the
Internet.
• Other problems: limited number of VLAN IDs (4,096), static allocation.
[Figure: frame layout Ethernet | VLAN | IP | TCP across L2 | L3 | L4; the VLAN tag travels in the L2 header.]
Solution: Network Virtualization
[Figure: the virtualization layering pattern: a virtual resource runs on a virtualization layer over a physical resource. For networks, a virtual Ethernet network runs over a tunnel over the physical Ethernet network. Tunneling separates virtual from physical: the virtual packet (Ethernet | IP | TCP) is carried inside physical headers (Ethernet | IP | GRE).]
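To make that layering concrete, here is a minimal sketch using the Scapy packet library (an assumed tool; the slides name no implementation) that builds the virtual packet and wraps it in the physical GRE headers shown above.

```python
# Minimal sketch, assuming Scapy: build the virtual packet
# (Ethernet | IP | TCP) and encapsulate it in physical headers
# (Ethernet | IP | GRE), matching the layering in the figure.
from scapy.all import Ether, IP, GRE, TCP

# The packet as the VM sees it, using virtual (tenant) addresses.
virtual = (Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02")
           / IP(src="10.0.0.1", dst="10.0.0.2")
           / TCP(dport=80))

# Physical headers added by the vswitch for transit; 0x6558 is GRE's
# "transparent Ethernet bridging" protocol type.
physical = (Ether()
            / IP(src="192.0.2.1", dst="198.51.100.1")
            / GRE(proto=0x6558)
            / virtual)

physical.show()  # prints the physical-over-virtual header stack
```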
Path of a Packet (No Tunnel)
• A packet from one VM to another
passes through a number of switches
along the way.
• Each switch only looks at the
destination MAC address to decide
where the packet should go.
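As a tiny illustrative sketch (addresses and port numbers invented), the per-switch decision is just a lookup of the destination MAC in a forwarding table:

```python
# Each switch on the path forwards purely on the destination MAC
# address, using a (typically learned) MAC-to-output-port table.
mac_table = {"02:00:00:00:00:01": 1,
             "02:00:00:00:00:02": 7}

def output_port(dst_mac: str):
    # A real switch floods on a miss; None stands in for that case here.
    return mac_table.get(dst_mac)

print(output_port("02:00:00:00:00:02"))  # -> 7
```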
Path of a Packet (Via Tunnel)
[Figure: two data centers, each with the rack topology above, connected across the Internet. Inside each data center the packet is switched at L2; across the Internet it is routed at L3. The virtual headers (Ethernet | IP | TCP) stay intact end to end, carried inside physical headers (Ethernet | IP | GRE).]
Challenges
• Setting up the tunnels:
– After VM startup
– After VM shutdown
– After VM migration
• Handling network failures
• Monitoring
• Administration
Use a central controller to set up the tunnels.
A Network Virtualization Distributed System
[Figure: a central controller speaks control protocols to an Open vSwitch (OVS) instance on each machine in both data centers; the VMs attach to OVS, while the ToR, aggregation, and core switches and the Internet simply carry the traffic over the wires.]
Controller Duties
• Monitor:
– Physical network
– VM locations, states
• Control:
– Tunnel setup
– All packets on virtual and physical network
– Virtual/physical mapping
• Tells the OVS instances running everywhere else what to do
Open vSwitch
• Ethernet switch implemented in software
• Can be remotely controlled
• Tunnels (GRE and others)
• Integrates with VMMs, e.g. XenServer, KVM
• Free and open source
openvswitch.org
OpenFlow protocol
• To manage the forwarding behavior of the fast path
• Flow table = ordered list of “if-then” rules:
– “If this packet comes from VM A and is going to VM B, then send it out via tunnel 42.”
• (No rule: send to controller.)
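A minimal sketch of that idea (plain Python, not the OpenFlow wire format): the flow table is an ordered list of if-then rules, and a miss is punted to the controller.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FlowRule:
    match: Callable[[dict], bool]  # predicate over packet header fields
    action: str                    # e.g. "output:tunnel42"

def lookup(flow_table: list, packet: dict) -> str:
    # Rules are checked in order; the first match wins.
    for rule in flow_table:
        if rule.match(packet):
            return rule.action
    return "send-to-controller"    # no rule: send to controller

# "If this packet comes from VM A and is going to VM B, send it via tunnel 42."
table = [FlowRule(lambda p: p["src"] == "vmA" and p["dst"] == "vmB",
                  "output:tunnel42")]
print(lookup(table, {"src": "vmA", "dst": "vmB"}))  # -> output:tunnel42
print(lookup(table, {"src": "vmC", "dst": "vmB"}))  # -> send-to-controller
```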
OVSDB protocol
• Used to manage Open vSwitch instances
• Management protocol for less time-critical configuration:
– Create many virtual switch instances
– Attach interfaces to virtual switches
– Tunnel setup
– Set QoS policies on interfaces
• Further reading about OVSDB protocol:
– http://networkheresy.com/tag/ovsdb/
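For concreteness, here is a hedged sketch of those operations driven through ovs-vsctl, Open vSwitch's command-line front end to OVSDB (invoked from Python only to stay in one language; it requires an OVS installation):

```python
# Sketch: the management operations listed above, issued via ovs-vsctl.
import subprocess

def vsctl(*args: str) -> None:
    subprocess.run(["ovs-vsctl", *args], check=True)

vsctl("add-br", "br0")                 # create a virtual switch instance
vsctl("add-port", "br0", "eth0")       # attach an interface to it
vsctl("add-port", "br0", "gre0",       # set up a GRE tunnel port
      "--", "set", "interface", "gre0",
      "type=gre", "options:remote_ip=192.0.2.1")
```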
OpenFlow in the Data Center
(One Possibility)
[Figure: the controller/OVS deployment from above, annotated with steps 1-5, listed below.]
1. VM sends a packet.
2. Open vSwitch checks its flow table: no match, so it sends the packet to the controller.
3. The controller tells OVS to set up a tunnel to the destination and send the packet on that tunnel.
4. OVS sends the packet on the new tunnel.
5. Normal switching and routing carry the packet to its destination in the usual way.
The same process repeats on the other end to send the reply back. This happens at most once per “flow”, and other optimizations keep it from happening too frequently.
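A self-contained sketch of the controller side of steps 2-4 (the placement table, tunnel names, and flow-table format are all invented for illustration):

```python
vm_location = {"02:00:00:00:00:02": "dc2-machine1"}  # dst MAC -> hosting machine
flow_table = {}                                      # dst MAC -> action string

def handle_packet_in(dst_mac: str) -> str:
    """Called on a flow-table miss: set up a tunnel and install a rule."""
    remote = vm_location[dst_mac]
    tunnel = f"tunnel-to-{remote}"            # e.g. a GRE port created via OVSDB
    flow_table[dst_mac] = f"output:{tunnel}"  # future packets match inside OVS
    return flow_table[dst_mac]                # the triggering packet follows too

print(handle_packet_in("02:00:00:00:00:02"))
# -> output:tunnel-to-dc2-machine1
```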
Open vSwitch: Design Overview
[Figure: a physical machine runs a hypervisor hosting VM 1, VM 2, and VM 3; each VM's VNICs attach to ovs-vswitchd in the host operating system, which forwards via the physical NICs and talks to a controller, an administrative CLI/GUI, and other network elements.]
Open vSwitch: Design Details
[Figure: the same design, split at the user/kernel boundary: ovs-vswitchd runs in user space, while the OVS kernel module forwards packets inside the kernel.]
Open vSwitch is Fast
Component       Bandwidth   Latency
Kernel module   > 1 Gbps    < 1 μs
ovs-vswitchd    100 Mbps    < 1 ms
Controller      10 Mbps     < 10 ms

As fast as the Linux bridge, with the same CPU usage.
Conclusion
• Companies spread VMs across data centers.
• Ordinary networking exposes differences between
VMs in the same data center and those in different
data centers.
• Tunnels can hide the differences.
• A controller and OpenFlow switches at the edge of
the network can set up and maintain the tunnels.
Can the production network be
the testbed?
Rob Sherwood, Glen Gibb, Kok-Kiong Yap,
Guido Appenzeller, Martin Casado, Nick
McKeown, and Guru Parulkar
OSDI, 2010
Problem
Evaluating new network services is hard:
• Good ideas rarely get deployed
• New services may require changes to switch software
• They also require access to real-world traffic
• Experimenters want to control the behavior of their network
Solution Overview: Network Slicing
• Divide the production network into logical slices
– Each slice/service controls its own packet forwarding
– Users pick which slice controls their traffic: opt-in
– Existing production services run in their own slice
• e.g., Spanning tree, OSPF/BGP
• Enforce strong isolation between slices
– Actions in one slice do not affect another
• Allows the (logical) testbed to mirror the production
network
– Real hardware, performance, topologies, scale, users
Network Slicing Architecture
A network slice is a collection of sliced switches/routers
• Data plane is unmodified
– Packets forwarded with no performance penalty
– Slicing with existing ASIC
• Transparent slicing layer
– Each slice believes it owns the data path
– Enforces isolation between slices
• i.e., rewrites or drops rules to adhere to slice policy
– Forwards exceptions to correct slice(s)
Slicing Policies
The policy specifies resource limits for each slice:
• Link bandwidth
• Maximum number of forwarding rules
• Fraction of switch/router CPU (based on control
traffic a particular slice controller can generate)
• FlowSpace: which packets does the slice control?
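A hypothetical policy entry mirroring those limits (field names invented; this is not FlowVisor's actual configuration format):

```python
slice_policy = {
    "slice": "experiment-1",
    "link_bandwidth_mbps": 100,        # cap on link bandwidth
    "max_forwarding_rules": 1000,      # cap on flow-table entries
    "switch_cpu_fraction": 0.10,       # enforced by limiting control traffic
    "flowspace": [{"tcp_dport": 80}],  # which packets the slice controls
}
```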
FlowSpace: Maps Packets to Slices
• FlowSpace is basically the
set of all possible header
values defined by the
OpenFlow tuple
• Only one controller can
ever control a particular
flowspace
– Priorities resolve the flowspace-overlap problem
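A minimal sketch of that mapping (plain Python stand-ins, not FlowVisor's data structures): entries are matched in priority order, so overlapping flowspaces resolve deterministically to one controller.

```python
from dataclasses import dataclass

@dataclass
class FlowSpaceEntry:
    priority: int
    match: dict        # header fields; a missing key acts as a wildcard
    slice_name: str

def owning_slice(flowspace, packet: dict):
    # Highest priority wins, so only one slice ever controls a given packet.
    for entry in sorted(flowspace, key=lambda e: -e.priority):
        if all(packet.get(k) == v for k, v in entry.match.items()):
            return entry.slice_name
    return None        # no slice controls this packet

fs = [FlowSpaceEntry(10, {"tcp_dport": 80}, "web-slice"),
      FlowSpaceEntry(5, {}, "default-slice")]
print(owning_slice(fs, {"tcp_dport": 80}))  # -> web-slice
print(owning_slice(fs, {"tcp_dport": 22}))  # -> default-slice
```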
Real User Traffic: Opt-In
• Allow users to Opt-In to services in real-time
– Individual flows can be delegated to a slice by a user
– Admins can add policy to slice dynamically
• Creates incentives for building high-quality services
[Figure: FlowVisor divides user traffic among a Web slice, a VoIP slice, a Video slice, and a slice for all the rest.]
FlowVisor Implemented on OpenFlow
• Sits between switches
and controllers
• Speaks OpenFlow up and
down.
• Acts like a proxy to
switches and controllers
• Datapaths and controllers
run unmodified
How does this work?
[Figure: for every PacketIn from the datapath, FlowVisor answers two questions: who controls this packet? Is this action allowed?]
Message Handling - PacketIn
[Flowchart: a PacketIn is first checked for LLDP. LLDP packets are sent to the appropriate slice (dropped if that slice's controller is not connected). Otherwise the match structure is extracted and matched against FlowSpace: on a match, and if the actions are allowed, the packet is sent to the owning slice (again dropped if its controller is not connected); on no match, an exception is logged and, if the packet has not been sent to any slice, a drop rule is inserted.]
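The same logic as a self-contained sketch (plain dictionaries stand in for FlowVisor's real state; the action-permission check is omitted for brevity):

```python
flowspace = {("tcp", 80): "web-slice"}   # (proto, dport) -> owning slice
connected = {"web-slice": True, "lldp-slice": True}
drop_rules = []

def handle_packet_in(pkt: dict) -> str:
    if pkt.get("is_lldp"):                       # LLDP is special-cased
        slice_ = "lldp-slice"
    else:
        match = (pkt["proto"], pkt["dport"])     # extract the match structure
        slice_ = flowspace.get(match)
        if slice_ is None:                       # no FlowSpace match
            drop_rules.append(match)             # insert a drop rule
            return "exception logged, drop rule inserted"
    if not connected.get(slice_, False):         # owning controller offline
        return "dropped: controller not connected"
    return f"sent to {slice_}"

print(handle_packet_in({"proto": "tcp", "dport": 80}))  # -> sent to web-slice
print(handle_packet_in({"proto": "udp", "dport": 53}))
# -> exception logged, drop rule inserted
```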
FlowVisor Virtualization
• Network Slice = Collection of
sliced switches, links, and
traffic or header space
• Each slice is associated with a controller
• Transparent slicing, i.e., every
slice believes it has full and
sole control of datapath
– FV enforces traffic and slice
isolation
• Controllers and switches do
not need to be modified
Not a generalized virtualization
FlowVisor Summary
• FlowVisor introduces the concept of a
network slice
• Originally designed to test new network
services on production traffic
• But, it’s really only a Network Slicer!
FlowVisor provides network slicing, but not complete network virtualization.
Programmable Virtual Networks: From Network Slicing to Network Virtualization
Ali Al-Shabibi
Open Networking Laboratory
Reference: nvirters.org/wp-content/uploads/2013/05/Virt-July-2013-Meetup.pptx
Network Virtualization
• Decoupling the services provided by a (virtualized) network
from the physical infrastructure
• Virtual network is a “container” of network services (L2-L7)
provisioned by software
• Faithful reproduction of services provided by a physical
network
– Analogy to a VM – complete reproduction of physical machine (CPU,
memory, I/O, etc.)
Reference:
http://www.opennetsummit.org/pdf/2013/presentations/bruce_davie.pdf
What is Network Virtualization?
[Figure: a cloud of existing technologies: MPLS, VRF, VPN, overlays, TRILL, VLAN.]
None of these give you a virtual network; they merely virtualize one aspect of a network.
Topology Virtualization
• Virtual links
• Virtual nodes
• Decoupled from
physical network
Address Virtualization
• Virtual Addressing
• Maintain current
abstractions
• Add some new ones
Policy Virtualization
• Who controls what?
• What guarantees are
enforced?
Network Virtualization vs. Slicing
Slicing
• Sorry, you can't: you would need to discriminate the traffic of two networks with something other than the existing header bits
• Thus no address virtualization and no complex topology virtualization
Network Virtualization
• Virtual nets are completely independent
• Virtual nets are distinguished by the tenant id
• Complete address and topology virtualization
Virtualization: State of the Art
• Functionality implemented at
the edge
• Use of tunneling techniques,
such as STT, VXLAN, GRE
• Network core is not available
for innovation
• Closed source controller
controls the behavior of the
network
• Provides address and topology
virtualization, but limited
policy virtualization.
• Moreover, the virtual topology looks like just one big switch
Big Switch Abstraction
[Figure: edge ports E1-E6, physically spread across Switch 1 and Switch 2, are presented to the tenant as a single big switch.]
Big Switch Abstraction
• A single switch greatly limits the flexibility of the network
controller
• Cannot specify your own routing policy.
• What if you want a tree topology?
OpenVirteX
Current Virtualization Solutions
• Networks are not programmable
• Functionality implemented at the
edge
• Network core is not available for
innovation
• Must provision tunnels to provide
virtual topology
• Address virtualization provided by
encapsulation
OpenVirteX
• Each virtual network is handed to a
controller for programming.
• Edge & core available for innovation
• The entire physical topology can be exposed to the downstream controller.
• Address virtualization provided by
remapping/rewriting header fields
• Both dataplanes and controllers can
be used unmodified.
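A toy sketch of that address virtualization by rewriting (tenant names and addresses invented): the edge maps a tenant's virtual address to a physical one on egress, and back on ingress.

```python
virt_to_phys = {("tenant-7", "10.0.0.5"): "172.16.3.12"}
phys_to_virt = {v: k for k, v in virt_to_phys.items()}

def rewrite_egress(tenant: str, pkt: dict) -> dict:
    out = dict(pkt)
    out["dst_ip"] = virt_to_phys[(tenant, pkt["dst_ip"])]  # virtual -> physical
    return out

def rewrite_ingress(pkt: dict) -> dict:
    out = dict(pkt)
    _tenant, out["dst_ip"] = phys_to_virt[pkt["dst_ip"]]   # physical -> virtual
    return out

pkt = rewrite_egress("tenant-7", {"dst_ip": "10.0.0.5"})
print(pkt)                   # -> {'dst_ip': '172.16.3.12'}
print(rewrite_ingress(pkt))  # -> {'dst_ip': '10.0.0.5'}
```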
Ultimate Goal
[Figure: several Network OS instances each see their own virtual network graph; OpenVirteX maps topology, address space, and control functions from each virtual network graph onto the physical network graph.]
High Level Features
• Support for more generalized network virtualization
as opposed to slicing
– Address virtualization: use extra header bits, or make clever use of a tenant id in the header
– Topology virtualization: on demand topology
• Integrate with cloud using OpenStack
• OpenVirteX is still in the design phase
Network Virtualization and SDN
• Network virtualization != SDN
– Predates SDN
– May use SDN, doesn’t require SDN
• Easier to virtualize an SDN switch
– Run separate controller per virtual network
– Leverage open interface to the hardware
Reference:
http://www.cs.princeton.edu/courses/archive/fall13/cos597E/docs/10virtualization.pptx
References
• Extending networking into the virtualization layer. Ben Pfaff, Justin Pettit, Teemu Koponen, Keith Amidon, Martin Casado, Scott Shenker. In Proceedings of the 8th ACM Workshop on Hot Topics in Networks (HotNets-VIII), New York City, NY, October 2009.
• Can the production network be the testbed? Rob Sherwood, Glen Gibb, Kok-Kiong Yap, Guido Appenzeller, Martin Casado, Nick McKeown, and Guru Parulkar. In Proceedings of the 9th USENIX Conference on Operating Systems Design and Implementation (OSDI '10), USENIX Association, Berkeley, CA, USA, 1-6, 2010.
• Reproducible network experiments using container-based emulation. Nikhil Handigol, Brandon Heller, Vimalkumar Jeyakumar, Bob Lantz, and Nick McKeown. In Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '12), ACM, New York, NY, USA, 253-264, 2012.
• Network Virtualization in Multi-tenant Datacenters. VMware Technical Report, 2013.