Packet processing in the fast path involves looking up bit patterns and deciding on an action at line rate. These functions have traditionally been handled by ASICs and NPUs, but with faster, cheaper CPUs and hardware/software acceleration it is now possible to move them onto commodity hardware. This tutorial covers the building blocks available to speed up packet processing, both hardware based (e.g. SR-IOV, RDT, QAT, VMDq, VT-d) and software based (e.g. DPDK, FD.io/VPP, OVS), and gives hands-on lab experience with DPDK and FD.io fast-path lookup in the following sessions. 1: Introduction to Building Blocks: Sujata Tibrewala
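As a taste of what "fast-path lookup" means in software, here is a minimal sketch using DPDK's longest-prefix-match (LPM) library; the table name, sizes, route, and next-hop values are illustrative only, and it assumes a recent DPDK API (16.04 or later) with the EAL already initialized.

```c
#include <stdint.h>
#include <rte_memory.h>
#include <rte_lpm.h>

/* Illustrative fast-path lookup table: route 10.0.0.0/8 to next hop 1.
 * Assumes rte_eal_init() has already run; names and sizes are arbitrary. */
static struct rte_lpm *make_fib(void)
{
    struct rte_lpm_config cfg = { .max_rules = 1024, .number_tbl8s = 256 };
    struct rte_lpm *lpm = rte_lpm_create("fib", SOCKET_ID_ANY, &cfg);

    if (lpm != NULL)
        rte_lpm_add(lpm, (10u << 24), 8, 1);  /* 10.0.0.0/8 -> next hop 1 */
    return lpm;
}

/* Per-packet action decision: one LPM lookup on the destination address. */
static uint32_t next_hop(struct rte_lpm *lpm, uint32_t dst_ip)
{
    uint32_t hop = 0;
    return (rte_lpm_lookup(lpm, dst_ip, &hop) == 0) ? hop : UINT32_MAX;
}
```

Everything the tutorial covers is, in one way or another, about getting packets to and from a loop like this at line rate.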
3. AGENDA
• Why Virtualization
• A Day in the Life of a Network Packet
• NFV Architecture and Ecosystem
• Intel’s role and Community Development
• Conclusions
8. Importance of Architectural Consistency
Market requires the same software running across both the physical and virtual appliance architectures in order to deliver a consistent, scalable solution.
[Diagram] vE-CPE deployed at various locations: non-virtualized CPE and physical appliances at branches and customer sites, virtual appliances at the network edge and in the private enterprise cloud / private data center, and vE-CPE at the NFVI-PoP connecting to centralized corporate IT infrastructure. Non-virtualized deployments carry north-south traffic; virtualized deployments carry east-west traffic.
Inconsistent architecture ("different software architectures"): fragmented services with higher latency; complex (costly) deployment/connectivity; more difficult to provision and scale services.
Consistent architecture (same software running across physical and virtual architectures): low latency and highly scalable; fast service response with consistent feature/functionality; common management and provisioning framework, easy to operate.
9. Intel is Investing to Lead the Transformation
• Drive an open ecosystem: Intel® Network Builders
• Collaborate with end users
• Deliver open reference architectures: Intel® Architecture, Linux, KVM
• Advance open source and standards
• Intel technology leadership
*Other names and brands may be claimed as the property of others
12. Network Platforms Group
Network Virtualization: Enables Multi-Tenancy
[Diagram: server virtualization vs. network virtualization. Left: a hypervisor on CPU, memory, and storage hosting VMs attached through a vSwitch to the NIC. Right: virtual network abstraction (network hypervisor) using tunnel overlays, e.g. VXLAN, NVGRE, Geneve: an SDN controller manages vSwitches carrying virtual overlay networks 1..N and their VMs over the physical IP network.]
• Abstract the physical network and overlay virtual networks over existing IP network infrastructure
• Dynamically created on demand; each tenant gets a separate virtual network (and virtual appliances)
• VM migration across physical subnets/geographies (see the VXLAN header sketch below)
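To make the overlay idea concrete, here is the 8-byte VXLAN header from RFC 7348 as a C struct; it rides inside an outer Ethernet/IP/UDP envelope (outer UDP destination port 4789), and the 24-bit VNI is what gives each tenant a separate virtual network. A sketch for illustration, not taken from the slides.

```c
#include <stdint.h>

/* VXLAN header (RFC 7348): 8 bytes carried after the outer
 * Ethernet/IP/UDP headers (outer UDP destination port 4789).
 * Total encapsulation overhead over IPv4:
 * 14 (Eth) + 20 (IPv4) + 8 (UDP) + 8 (VXLAN) = 50 bytes. */
struct vxlan_hdr {
    uint8_t flags;        /* bit 3 (0x08) set => VNI field is valid      */
    uint8_t reserved1[3];
    uint8_t vni[3];       /* 24-bit VXLAN Network Identifier: one per
                             tenant virtual network                      */
    uint8_t reserved2;
};
```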
13. Network Platforms Group
Network Function Virtualization and Service Chaining
[Diagram: network services (firewall, load balancer, VPN) move from fixed-function boxes to network functions virtualized on general-purpose servers: VNFs run in VMs over a hypervisor and vSwitch on pooled CPU, memory, and storage, attached to tenant virtual networks 1..n.]
• Virtualize network functions as software appliances on commercial off-the-shelf servers
• Automate provisioning of L4-L7 Network services in data center
• Cloud provider offers physical equivalent of network services to each tenant virtual network
14. Network Platforms Group
Network Virtualization with a Service Overlay
[Diagram: virtual network abstraction (network hypervisor) using tunnel overlays, e.g. VXLAN, NVGRE, Geneve, over the physical IP network; an SDN controller manages the vSwitches, and a service overlay (or service plane) runs across the tenant virtual networks 1..n.]
• Service overlays are deployed over network virtualization
• Service chains are dynamically created on demand, within a tenant virtual network
• Network Service Header (NSH) and Geneve are examples of protocols that enable service overlays
16. Packet Coming from the Hardware
Key elements for physical networking:
• Ethernet Port on the server – commonly called pNIC (physical NIC)
• RJ45 Cable
• Ethernet Port on the physical switch
• Uplink Port on the physical switch – connects to external network
17. Packet Processing in Virtual Networking
Key elements for virtual networking:
• Ethernet port on the VM
• Virtual RJ45 cable
• Ethernet port on the virtual switch
• Uplink port on the virtual switch
All these elements need to be virtualized either by:
• the hypervisor (KVM, Xen, Hyper-V, etc.), or
• hardware that recognizes virtual machines
18. Telco Cloud - End-to-End Agility: distributed, local, automated
[Diagram: Telco Cloud attributes]
• Automated infrastructure; infrastructure management and orchestration
• Optimized workload placement; security policy and lifecycle automation; RESTful interfaces
• Automated service level agreements; unified management plane
• Services management & orchestration over VNFs (MEC*, VRAN*, IoT*, EPC*) and infrastructure orchestration software
• Services delivery; modernize and virtualize the system architecture; 4:1 workload consolidation
• Intel® VT + NFV-optimized platforms; resource pools of storage, network, and compute
• Infrastructure attributes: power, performance, security, thermals, utilization, location
VNF – Virtual Network Function; EPC – Evolved Packet Core; MEC – Mobile Edge Computing; IoT – Internet of Things
19. Virtualisation Technology
[Platform diagram]
• Orchestration: OpenStack, CloudStack, OpenShift, Amazon EC2, Microsoft Azure, Google Compute Engine
• Control plane: OpenDaylight, ONOS, OpenContrail
• Infrastructure layer / data plane: L2, L3, and security VNF appliances, each with DPDK and a virtual NIC, over DPDK-accelerated virtual switches (FD.io, Lagopus, Open vSwitch, POF, OpenSwitch, BESS) on a VMM/hypervisor (KVM, Xen, Hyper-V, QEMU)
• Intel Architecture NFV/SDN accelerators: VT-d, SR-IOV, VMDq, RDT, Hyperscan, QAT chipset acceleration, NIC silicon, IA CPU
20. Application Plane
[Same platform diagram as slide 19, but with SR-IOV virtual functions exposed to the VNF appliances in place of virtual NICs.]
21. DCG Connectivity Group
Worldwide Server Market - Ethernet Port Speed Adoption Forecast
Speed of adoption by segment and year:
Tier 1 Cloud DC (>1M servers)
  Innovators / early adopters: 2016: 10GbE, 40GbE | 2017: 40GbE, 50GbE | 2018: 50GbE, 100GbE | 2019: 100GbE | 2020: 100GbE+
  Majority adopters: 2016: 10GbE | 2017: 10GbE, 25GbE | 2018: 25GbE, 50GbE | 2019: 50GbE, 100GbE | 2020: 50GbE, 100GbE
Tier 2/3 Cloud DC
  Innovators / early adopters: 2016: 1GbE, 10GbE | 2017: 10GbE, 25GbE | 2018: 25GbE, 50GbE | 2019: 50GbE, 100GbE | 2020: 50GbE, 100GbE
  Majority adopters: 2016: 1GbE, 10GbE | 2017: 10GbE | 2018: 10GbE, 25GbE | 2019: 25GbE, 50GbE | 2020: 25GbE, 50GbE
Enterprise / Premises
  Innovators / early adopters: 2016: 1GbE, 10GbE | 2017: 10GbE, 40GbE | 2018: 10GbE, 40GbE/50GbE | 2019: 10GbE/40GbE, 50GbE | 2020: 50GbE
  Majority adopters: 2016: 1GbE | 2017: 1GbE, 10GbE | 2018: 1GbE, 10GbE | 2019: 10GbE | 2020: 10GbE
Source: Worldwide Server Market – Network Metrics, Dell'Oro Group, January 2017
Definitions:
1GbE: single to multiple port 1GbE
10GbE: single to multiple port 10GbE
40GbE: single 40GbE or quad-port 10GbE
25GbE: single-port 25GbE
50GbE: dual-port 25GbE or single 50GbE
100GbE: single 100GbE or quad-port 25GbE
22. DCG Connectivity Group
Market Dynamics in 2017
10GbE: Continued growth in '17
• 2016: 13.7M ports
• 2017: 16.7M ports
• Seeing demand for 4x10GbE
• SFP+ and 10GBASE-T
25/50GbE: Starting to ramp in '17
• 2016: 25GbE 240k ports, 50GbE 100k ports
• 2017: 25GbE 1M ports, 50GbE 800k ports
40GbE: Shifting in '17
• Tier 1 cloud moves to 25GbE+
• Rest of the market continues to grow
• 2016: 2M ports
• 2017: 1.5M ports
Sources: Dell'Oro Group, November 2016
23. DCG Connectivity Group
XL710 40GbE QSFP+
Network Virtualization
Overlays Acceleration
X520 10GbE SFP+
World’s Best Selling
10GbE CNA
X540 10GBASE-T
World’s 1st
Single Chip 10GBASE-T
XXV710 25GbE SFP28
Cloud and Network
Virtualization Overlays
X550 10GBASE-T
2nd Generation
Single Chip 10GBASE-T
X710 10GBASE-T
Quad-Port
10GBASE-T
X710 10GbE SFP+
Cloud and Network
Virtualization Overlays
EthernetAdapters
Intel® Ethernet Adapter in Market
700Series
23
500Series
Latency
Scalability
Agility
Performance
24. Virtualisation Technology
[Same platform diagram as slide 19.]
25. Packet Processing: Kernel vs. User Space
[Diagram: two packet paths through memory (RAM)]
Kernel-space driver: the NIC DMAs packet data into descriptor rings and kernel socket buffers (skb's); the driver is configured through CSRs and descriptors and signals packet arrival with interrupts; the kernel stack then copies packet data from kernel to user space when the application issues system calls.
User-space driver with zero copy: the descriptor rings and device configuration are mapped from the kernel into user space through a UIO driver, and the DPDK PMD in the DPDK-enabled application works on DMA'd packet data directly.
• Benefit #1: removes the data copy from kernel to user space
• Benefit #2: no interrupts
• Benefit #3: the network stack can be streamlined and optimized
(A minimal polling-loop sketch follows.)
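A minimal sketch of what benefits #1 and #2 look like in code: a DPDK application polls the device through the PMD instead of sleeping on interrupts, and the mbufs it receives already point at DMA'd packet data in user space, so no kernel copy occurs. Port/queue numbers and the processing hook are placeholders.

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST 32

/* Hypothetical forwarding loop: poll the RX queue; the PMD walks the
 * user-space-mapped descriptor ring, so there are no interrupts and no
 * kernel-to-user data copies. Assumes the port is already configured. */
static void rx_loop(uint16_t port, uint16_t queue)
{
    struct rte_mbuf *pkts[BURST];

    for (;;) {
        uint16_t n = rte_eth_rx_burst(port, queue, pkts, BURST);
        for (uint16_t i = 0; i < n; i++) {
            /* application logic goes here, e.g. an LPM lookup */
            rte_pktmbuf_free(pkts[i]);
        }
    }
}
```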
26. Benefits – Eliminating / Hiding Overheads
Interrupt
Context
Switch
Overhead
Kernel User
Overhead
Core To Thread
Scheduling
Overhead
Eliminating How?
Polling
User Mode
Driver
Pthread
Affinity
4K Paging
Overhead
PCI Bridge
I/O
Overhead
Eliminating /Hiding How?
Huge Page
Lockless Inter-core
Communication
High Throughput
Bulk Mode I/O calls
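As a sketch of the pthread-affinity technique (DPDK's EAL does this automatically for its lcores, e.g. via the -l core-list option), pinning a thread to one core with plain pthreads looks like this:

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread to a single core so the scheduler never migrates
 * it, eliminating core-to-thread scheduling overhead. Returns 0 on success. */
static int pin_to_core(int core)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}
```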
To tackle this challenge, what kind of devices and latencies do we have at our disposal?
27. Last Level Cache
Challenge: what if there is an L1 cache miss and an LLC hit?
[Diagram: Core 0 with its L1 and L2 caches in front of the LLC; an L1 miss served by the LLC costs ~40 cycles.]
With a 40-cycle LLC hit, how will you achieve an Rx budget of 19 cycles? How?
28. Packet I/O Solution – Amortizing Over Multiple Descriptors
• The ~40-cycle LLC access gets amortized over multiple descriptors
• Roughly getting back to the latency of an L1 cache hit per packet
• Similarly, for packet I/O, go for burst reads
29. Packet I/O: Examine a Bunch of Descriptors at a Time
[Diagram: Core 0 reads 8 packet descriptors (0–7) from the LLC in one go.]
Read 8 packet descriptors at a time: the ~40-cycle LLC access is amortized over the 8 descriptors (a code sketch follows).
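In DPDK terms, the amortization above is just a larger RX burst plus prefetching; a sketch with a burst of 8 to mirror the slide (real applications commonly use 32):

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_prefetch.h>

/* Sketch: pay the ~40-cycle LLC access once per burst of 8 descriptors,
 * and prefetch each packet's first cache line so headers are in L1 by the
 * time the second loop touches them. */
static void rx_burst8(uint16_t port, uint16_t queue)
{
    struct rte_mbuf *pkts[8];
    uint16_t n = rte_eth_rx_burst(port, queue, pkts, 8);

    for (uint16_t i = 0; i < n; i++)
        rte_prefetch0(rte_pktmbuf_mtod(pkts[i], void *));

    for (uint16_t i = 0; i < n; i++) {
        /* process headers here; just drop in this sketch */
        rte_pktmbuf_free(pkts[i]);
    }
}
```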
33. NVO – Key Data-Plane Encapsulation Protocols
Encapsulation Protocol | Advocate | Description
GRE (Generic Routing Encapsulation) | Cisco* | IP + GRE; inner payload: Ethernet/IPv4/IPv6/NSH
STT (Stateless Transport Tunneling) | Nicira* | IP + TCP-like + STT; inner payload: Ethernet only
VXLAN (Virtual Extensible LAN) | VMware*, Cisco* | IP + UDP + VXLAN; inner payload: Ethernet only
NVGRE (Network Virtualization using GRE) | Microsoft* | IP + modified GRE; inner payload: Ethernet only
Geneve (Generic Network Virtualization Encapsulation) | VMware*/Nicira* | IP + UDP + Geneve; inner payload: Ethernet/IPv4/IPv6
VXLAN-GPE (Generic Protocol Extension for VXLAN) | Cisco* | IP + UDP + VXLAN-GPE; inner payload: Ethernet/IPv4/IPv6/NSH
NSH (Network Service Header) | Cisco* | requires a transport protocol; inner payload: Ethernet/IPv4/IPv6
(A small classifier sketch, keyed on the outer UDP destination port, follows.)
[Diagram: server virtualization vs. network virtualization: hypervisors and Open vSwitch instances over physical hardware and the physical IP network, with virtual network abstraction using tunnel overlays (e.g. VXLAN, Geneve, NVGRE) managed by a network virtualization controller such as VMware* NSX, carrying Virtual Networks 1–3.]
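Since several of these overlays are UDP-based, a decapsulating switch can tell them apart from the outer UDP destination port alone; a sketch using the IANA-assigned ports (VXLAN 4789, VXLAN-GPE 4790, Geneve 6081):

```c
#include <stdint.h>
#include <arpa/inet.h>

/* IANA-assigned outer UDP destination ports for the UDP-based overlays. */
enum { VXLAN_PORT = 4789, VXLAN_GPE_PORT = 4790, GENEVE_PORT = 6081 };

/* Classify an overlay packet by outer UDP destination port
 * (given in network byte order, as it sits in the packet). */
static const char *overlay_kind(uint16_t udp_dst_be)
{
    switch (ntohs(udp_dst_be)) {
    case VXLAN_PORT:     return "VXLAN";
    case VXLAN_GPE_PORT: return "VXLAN-GPE";
    case GENEVE_PORT:    return "Geneve";
    default:             return "other";  /* GRE/NVGRE/STT are not UDP */
    }
}
```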
34. Packets per Second
software.intel.com/networking
Frame Part | Minimum Frame Size | Maximum Frame Size
Inter-Frame Gap (96 bit times) | 12 bytes | 12 bytes
MAC Preamble (+ SFD) | 8 bytes | 8 bytes
MAC Destination Address | 6 bytes | 6 bytes
MAC Source Address | 6 bytes | 6 bytes
MAC Type (or Length) | 2 bytes | 2 bytes
Payload (Network PDU) | 46 bytes | 1,500 bytes
Frame Check Sequence (CRC) | 4 bytes | 4 bytes
Total Frame Physical Size | 84 bytes | 1,538 bytes
Table 1. Maximum Frame Rate and Throughput Calculations for a 1-Gb/s Ethernet Link
[1,000,000,000 b/s / (84 B * 8 b/B)] == 1,488,096 f/s (maximum rate)
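The same arithmetic in code, for any link speed and wire size (a sketch; the 84-byte figure is the 64-byte minimum frame plus the 20 bytes of preamble/SFD and inter-frame gap from Table 1):

```c
#include <stdio.h>

/* Maximum frame rate = link bits/s divided by bits per frame on the wire. */
static double max_fps(double link_bps, unsigned wire_bytes)
{
    return link_bps / (wire_bytes * 8.0);
}

int main(void)
{
    /* 1 Gb/s, minimum-size frame: 84 bytes on the wire => ~1.488 Mf/s */
    printf("%.0f frames/s\n", max_fps(1e9, 84));
    return 0;
}
```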
35. On Intel® Architecture
At 256B packets, an 18-core CPU running at 2 GHz can satisfy 100 GbE throughput as long as we stay within 751 cycles/packet.
• At 512B, the budget is 1,447 cycles
If we run at an instructions-per-clock (IPC) of ~2:
• 256B = 1,502 instructions
• 512B = ~2,894 instructions
If the IPC is 2.5:
• 256B = 1,877 instructions
• 512B = 3,617 instructions
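A back-of-envelope version of this budget in code. Note the 20-bytes-per-frame wire-overhead assumption is mine and yields ~795 cycles at 256B rather than the slide's 751, so the slide evidently uses a slightly different framing assumption; the shape of the calculation is the point.

```c
#include <stdio.h>

/* Cycle budget per packet = total CPU cycles/s divided by packets/s.
 * Assumption (not from the slide): 20B/frame of preamble + inter-frame gap. */
int main(void)
{
    const double cores = 18, hz = 2e9, link_bps = 100e9, frame_bytes = 256;
    double pps    = link_bps / ((frame_bytes + 20) * 8);  /* ~45.3 Mpps  */
    double budget = cores * hz / pps;                     /* cycles/pkt  */

    printf("%.1f Mpps -> %.0f cycles/packet\n", pps / 1e6, budget);
    return 0;
}
```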
Disclaimer: Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using
specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist
you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance.
36. Open vSwitch with DPDK – Performance
Disclaimer: Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
Configurations: E5-2658 (2.1GHz, 8C/16T) DP; PCH: Patsburg; LLC: 20MB; 16 x 10GbE Gen2 x8; 4 memory channels per socket @ 1600MT/s, 8 memory channels total; DPDK 1.3.0-154. E5-2658 v2 (2.4GHz, 10C/20T) DP; PCH: Patsburg; LLC: 20MB; 22 x 10GbE Gen2 x8; 4 memory channels per socket @ 1867MT/s, 8 memory channels total; DPDK 1.4.0-22. *Projection data on 2 sockets extrapolated from 1S run on Wildcat Pass system with E5-2699 v3.
37. DPDK Generational Performance Gains
IPv4 L3 forwarding performance of 64-byte packets.
Disclaimer: Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance.
* Other names and brands may be claimed as the property of others.
Broadwell-EP system configuration:
• Platform: SuperMicro* X10DRX
• CPU: Intel® Xeon® Processor E5-2658 v4, 2 sockets, 14 cores per socket (28 threads)
• Chipset: Intel® C612
• LLC: 30 MB
• QPI/DMI: 9.6 GT/s; PCIe: Gen3 x8
• Memory: DDR4 2400 MHz, 1Rx4 8 GB (64 GB total), 4 channels per socket
• NIC: 10 x Intel® Ethernet CNA XL710-QDA2, PCI Express Gen3 x8, dual-port 40 GbE (1x40G per card)
• BIOS: version 1.0c (02/12/2015)
• Software: Debian 8.0, kernel 3.18.2, DPDK 2.2.0
L3 forwarding performance (Mpps; 64B throughput in Gbps) by year and platform:
• 2010 (2S WMR): 55 Mpps (37 Gbps)
• 2011 (1S SNB): 80.1 Mpps (53.8 Gbps)
• 2012 (2S SNB): 164.9 Mpps (110.8 Gbps)
• 2013 (2S IVB): 255 Mpps (171.4 Gbps)
• 2014 (2S HSW): 279.9 Mpps (187.2 Gbps)
• 2015 (2S BDW): 346.7 Mpps (233 Gbps)
38. Software Router Examples
• "Now we get 10G line rates per core with 64-byte packets and linear performance as we add cores," said Herrell. "Systems shipping today deliver almost 200G throughput for a two-socket server, and in the routing/firewall world that is shocking because it replaces $100,000 proprietary boxes." – Brocade Vyatta
• Sandvine, Dell®, and Intel®, using standards-based virtualization technologies, have achieved data-plane performance at scale: 1.1 Tbps (using realistic traffic, including diverse protocols, encapsulation, and tunneling)
• Achieving 100 Gbps performance at the core with Poptrie and Kamuee Zero: NTT Communications
39. Intel® RDT: CMT & CAT
Cache Monitoring Technology (CMT):
• Identify "noisy neighbors" and misbehaving or cache-starved applications, and reschedule according to priority
• Cache occupancy reported on a per-Resource Monitoring ID (RMID) basis
Cache Allocation Technology (CAT):
• Last-level cache partitioning mechanism enabling the separation of applications, threads, VMs, etc.
• Misbehaving threads can be isolated to increase performance determinism
[Diagram: hypervisor, vSwitch, and apps on cores 0..n sharing the last-level cache.]
Cache monitoring and allocation technologies improve cache visibility and run-time determinism.
40. Top Networking Challenges Seen in IT
Source: TechTarget's 2015 purchasing intentions survey
• 56% Improving network security
• 44% Need more bandwidth
• 34% Network virtualization
• 33% Aligning IT and corporate goals
• 30% Moving applications to the cloud
• 29% Ensuring applications run optimally
• 27% BYOD-related access and policy concerns
Intel is committed to helping solve these critical challenges
41. Virtualisation Technology
[Same platform diagram as slide 19.]
42. DPDK Acceleration Enhancements
[Diagram: the DPDK framework and its ecosystem; a mempool/mbuf sketch follows.]
• DPDK API consumers: traffic generators (Pktgen, T-Rex, Moongen, ...), vSwitches (OVS, Lagopus, ...), network stacks (libUNS, mTCP, SeaStar, libuinet, TLDK, ...), DPDK example apps, VNF apps, video apps, proxy apps, ...
• Core libraries: EAL, MALLOC, MBUF, MEMPOOL, RING, TIMER
• Platform: KNI, POWER, IVSHMEM
• Classification: LPM, ACL, HASH (plus DPI via Hyperscan)
• QoS: METER, SCHED; PIPELINE
• Utilities: IP Frag, CMDLINE, JOBSTAT, KVARGS, REORDER, TABLE
• Packet access (PMDs): e1000, ixgbe, i40e, fm10k, bonding, af_pkt, xenvirt, enic, ring, cxgbe, vmxnet3, virtio, mlx4, memnic, others; ETHDEV
• Acceleration enhancements (legacy and future): AES-NI crypto, compression, IPSec, programmable classifier/parser, event-based programming models, threading models (lthreads, ...), external mempool manager, SoC PMDs and SoC model, 3rd-party HW/SW (GPU/FPGA, SoC)
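A sketch of the MEMPOOL and MBUF core libraries in use (names and sizes are illustrative; assumes the EAL is initialized): every PMD RX queue is fed from a pool like this, allocated from hugepage memory with a per-core cache to avoid lock contention.

```c
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Create a hugepage-backed packet-buffer pool with a per-core cache;
 * RX queues are later set up to allocate their mbufs from it. */
static struct rte_mempool *make_pool(void)
{
    return rte_pktmbuf_pool_create("mbuf_pool",
                                   8192,   /* mbufs in the pool (illustrative) */
                                   256,    /* per-core cache size */
                                   0,      /* private data area */
                                   RTE_MBUF_DEFAULT_BUF_SIZE,
                                   rte_socket_id());
}
```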
43. DPDK's Path to Communities, Vendors, and Service Providers
Fully open-source (BSD-licensed) software project with a strong development community: http://dpdk.org
Major contributors to DPDK open-source software [logos in original].
DPDK is available as part of the following OS distributions [distribution logos in original]: Version 6+, Version 7.1+, Version 7.1 and higher, Version 15.10+, Version 10.1+, Version 22+
Open-source projects based on DPDK: mTCP, Seastar, Pktgen, Netflow, and many more (see backup)
Intel® ONP, FD.io members, end customers, vendors
Intel® Network Builders: fostering a vibrant ecosystem to lead the network transformation of tomorrow – https://networkbuilders.intel.com/solutionscatalog
44. Telco Cloud - End-to-End Agility: distributed, local, automated
[Repeat of slide 18.]
47. Scale Program / Community Development Collaboration
Enablement framework stages: joint path-finding / discovery → optimizations → PoC / trials / deployments. (Logos are at approximate positions on the framework.)
• TOTAL: 117 – partners influenced to optimize their solutions on Intel CPUs and other ingredients this year
• TOTAL: 72 – development/selection of end-to-end NFV solution use cases with multiple partners
• TOTAL: 33 – deployed: 22
Meetups bring in new partners and let them discover SDN/NFV ingredients on IA; hands-on training, IDZ collateral, and IEMs help support optimization; continued help using hands-on training, IDZ collateral, and IEMs supports individual deployments.
48. Model 2017 to Help Fast-Track ISV Solutions
Continue to win developer mindshare and increase adoption of IA in the NFV/SDN space by deepening engagements via:
• Live training
• Active innovators
• Dev Mesh projects
[Word cloud: DPDK, QuickAssist, OpenStack, VTune, SDN/NFV Forum, Open vSwitch, RDT, FD.io, SR-IOV, VMDq, VT-d, Intel Innovator, Dev Mesh, live training, ISV support, IDZ]
• 1300+ members
• 2000+ developers trained worldwide, from 146 companies
• 8 active innovators
• 12 Dev Mesh projects
• 8000+ organic users every month
51. Wipro Case Study
• Visa payment system PoC
• Wipro: system integrator (40 SMEs)
• In-memory database, Aerospike (9-node cluster)
• Cisco UCS 460, with 96 Intel cores for horsepower and DPDK to manage them (5-node cluster)
• Non-volatile memory and flash drives
• Requirement: network scalability from 10G to 40G; ability to process messages at a low latency of 13 ms with a high throughput of 15,000 credit card transactions per second
• Project status:
  • Started in July, with developers trained via the DPDK/NFV dev lab on July 11th
  • Scheduled to deploy for evaluation by Visa by the end of October 2016
  • Scale of deployment: worldwide
Scale community engagement: developers trained at the DPDK/NFV hands-on lab on July 11th; continued participation in meetups; IDZ collateral for BKMs; email support from the Scale team.
Training feedback from the Wipro program manager:
[Ashish] The highlight of the training was the hands-on approach to getting familiar with DPDK. This is critical for the success of such initiatives, and I understand that this requires a lot of planning (infrastructure setup). I hope that Intel continues to organize such trainings to educate developers.
[Ashish] The community and experts have been helpful, and we would reach out for more discussions.
52. What are Networking Innovators doing right now?
Dharani Vilwanathan (Dev Lab winner)
Project: PerfectStream: A DPDK-based Video Gateway
About the Project:
PerfectStream is primarily a Video Gateway that receives multiple streams, stores the feed and/or relays the feed as needed in the way
the client prefers.
53. What are Networking Innovators doing right now?
Shivaram Mysore
Project: Deploying SDN Wired/Wireless Network
About the project: Faucet enables replacement of a legacy L2/L3 network switch with SDN functionality. Here, OVS + DPDK on an Intel x86 white box is used as the data plane (switch), with a Faucet controller managing it.
54. What are our Innovators doing right now?
Sridhar Pitchai
Project: DPDK datapath for control plane traffic
Objective: Implement a DPDK-based data path to bypass the kernel IP stack for packets punted to the CPU from a vendor-chip-based fast path.
The FlexSwitch NOS is currently running in some of the world's most demanding networks, with the same architectural model that has been proven by Facebook, Amazon, and others.
55. Conclusions
Network Function Virtualization and Software Defined Networking are transforming the industry by moving network functions from fixed-function ASICs to commodity hardware.
• The answer to scalable, performant virtualization is to use software for agility and hardware offloads for well-defined workloads
• Intel is working with the ecosystem to define the best solutions in both software and hardware
• DPDK is an example of a successful open-source project that helps the industry implement packet processing on x86-based platforms
• We are here to help you in your work
56. Thank you
sujata.tibrewala@intel.com
@sujatatibre
57. Meetup Partnerships, Bay Area (total developer reach ~10,000+)
• Partner: Members: 93; Geo: San Jose; http://www.meetup.com/sbysdnnfvcloud/
• Partner: Members: 809; Geo: Santa Clara; http://www.meetup.com/SDN-Switching-Group/
• Intel Developer Zone meetup: Members: 1300+; Geo: Santa Clara; http://www.meetup.com/Out-Of-The-Box-Network-Developers
• Partner: Members: 5739; Geo: San Francisco; http://www.meetup.com/openstack/
• Partner: Members: 2851; Geo: Santa Clara; http://www.meetup.com/openvswitch
59. External Developer Events
• Meetups
• TCS SDN/NFV one-day event, Sep 2016
• Women Who Code / VMware: DPDK and Open vSwitch hands-on, Sep 2016
• DPDK Summit Bangalore, April 2017
Editor's notes
Place at the back of the deck
Importance of Physical and Virtual product architecture consistency
Market ideally requires the same software running across both the physical and virtual appliance architectures in order to deliver a consistent, scalable solution
Network response
Feature and functionality
Performance scaling
Common management and provisioning framework
Ease of deployment
Today, many vendors are offering hybrid solutions (Physical/Virtual) to meet virtualized performance requirements
Architectural consistency enables a significantly more cohesive and streamlined solution offering
Intel is influencing the transformation through a 4-part strategy (this is what SDND is all about: SDND enables the transformation).
The 4 elements of the strategy feed each other, creating a strong foundation for the industry to leverage.
1. Advance Open Source and Open Standards
Promote and contribute to industry standards and open source solutions for interoperability
Committed to “Open” standards for a competitive market
2. Deliver Open reference Designs
Leading performance, security, open source software and reference designs
Enable industry leading manageability by exposing health, state, resource availability for optimal workload placement and configuration
3. Enable Open Ecosystem on IA
Enable TEMs/OEMs to deliver industry leading performance, power, cost, security optimized solutions
4. Collaborate on trials and deployments
Building solution experience with leading Enterprise, Telco and Cloud Service Providers and Vendors
The left side of the diagram shows what SDN means and the right side shows what NFV means.
In Server virtualization, the hardware is abstracted by a hypervisor and virtual resources are presented to guest operating systems
Compute and storage resources are virtualized, pooled and provisioned automatically. Similarly, Network virtualization technology allows abstraction of the network, virtual networks can be created as an overlay on top of existing physical infrastructure. Virtual network resources can be pooled and dynamically created and provisioned any time anywhere. The virtual network configuration & policies can be stored and restored like any other virtual compute or storage resources. VM mobility can be supported across subnets and geographies.
This is very similar to VPN technology, where a user is able create overlays over Internet to connect anytime anywhere to corporate networks.
Network virtualization uses tunneling technology to tunnel Ethernet traffic over existing IP networks. E.g. NVGRE, VXLAN tunneling
Currently network virtualization is implemented in software virtual switches in hypervisors. E.g. Open vSwitch (OVS), VMware vDS, Hyper-V virtual switch. Gateway devices are used to bridge to legacy networks that do not support Network virtualization. Gateway function can be implemented in software on standard servers or in HW switches (e.g. Top of Rack switches).
Network virtualization Control/management functionality is performed by SDN controller (a.k.a. Network virtualization controller or network hypervisor), e.g. VMware NSX, Open Daylight(ODL), MS Hyper-V Network Virtualization, open virtual network (OVN), virtual networking as a service provided as part of Openstack.
The Network virtualization technology also provides complete isolation from other virtual networks running concurrently on the same physical network infrastructure, for example a company ABC (or department ABC) virtual network coexists with a company XYZ (or department XYZ) virtual network collocated in the same public (or private) cloud infrastructure. So isolation, data privacy and service assurance are very important in these deployments.
In current data centers network services such as firewall, IDS, load balancers are implemented in specialized boxes that are deployed typically in the network infrastructure. Multiple such services are chained in the physical network infrastructure.
A customer moving to public or private cloud expects to have the same services available on virtual networks as well. A data center operator or service provider wants to offer these L4-L7 applications as a service to their customers.
The network services can be virtualized and run on VMs alongside business applications; these are also called as VNFs (virtual network functions) in ETSI reference architecture.
With SDN and NFV technologies, network services can be provisioned when VMs are provisioned and the traffic can be dynamically rerouted to flow through virtual appliances (VNFs) before reaching business applications. Forwarding traffic through multiple network service functions is also called as service chaining.
So network services can be virtualized, pooled and automatically provisioned and available within a virtual network for use by a tenant.
An enterprise or medium business that gets a pool of compute resources in the cloud wants to create a virtual network and also wants to have network services like VPN, firewall, IDS/IPS, and load balancers provisioned within the virtual network. A network service overlay created over virtual overlay networks enables creating and provisioning network functions (services) and transporting data between the network services. Service function chaining enables forwarding data through multiple network services chained in a service path before delivering it to the VM running business applications.
Traffic arriving into the virtual network is classified by a Service Classifier, that determines the Service Path and a Service Function Forwarder (SFF could be implemented in a virtual switch) in the service plane forwards packet data to the next service function in the chain before delivering it to VM/applications. A service orchestrator composes service chains/service graphs depending on the policies configured by the administrator.
Network services Header (NSH) is a protocol used in a service plane for carrying meta data between services, service path identifier that enables creation of a service chain and forwarding between the services. NSH header is transport independent and can be carried over any overlay transport protocol like VXLAN, VXLAN-GPE, GRE, etc.,
Keith Wiles will introduce us to what DPDK means, the core APIs, and how it is used in the NFV space.
Rashmin and Rahul will extensively talk about how we are enabling network resources to be virtualized in the network.
They will talk about how technologies like VT-d and SR-IOV enable packets to get faster from the NIC all the way up to the VMs.
Irene, Keith, Ashok and Clayne will talk about how software virtualization works. They will talk about what virtual I/O means and how Open vSwitch uses it to switch packets up to the VM in software.
Sangjin and Josh will talk about BESS, which is another open-source soft-switch initiative started by Berkeley.
Each of the sessions will be followed by hands-on sessions or a code walkthrough to give you a head start on working with these technologies.
We will end the session with performance optimization tips from MJ and how to use open-source tools and VTune to get a sense of where your performance bottlenecks are.
Last but not the least, Georgi will talk about how we at Intel benchmark DPDK and report the performance publicly.
A disclaimer: not everything is shared here because not everyone is under NDA, but we are sharing whatever is public-domain knowledge.
Can be physical or virtual hardware (i.e. networking stack in VM).
In system programming, an interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is temporary, and, after the interrupt handler finishes, the processor resumes normal activities.[1] There are two types of interrupts: hardware interrupts and software interrupts.
Receive ring buffers are shared between the device driver and NIC.
The card assigns a transmit (TX) and receive (RX) ring buffer. As the name implies, the ring buffer is a circular buffer where an overflow simply overwrites existing data.
There are two ways to move data from the NIC to the kernel, hardware interrupts and software interrupts, also called SoftIRQs.
The RX ring buffer is used to store incoming packets until they can be processed by the device driver. The device driver drains the RX ring, typically via SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or “skb” to begin its journey through the kernel and up to the application which owns the relevant socket.
The TX ring buffer is used to hold outgoing packets which are destined for the wire.
These ring buffers reside at the bottom of the stack and are a crucial point at which packet drop can occur, which in turn will adversely affect network performance.
STT - https://tools.ietf.org/html/draft-davie-stt-01
VXLAN - http://www.rfc-editor.org/rfc/rfc7348.txt
Geneve - https://datatracker.ietf.org/doc/draft-gross-geneve/
VXLAN-GPE - http://www.ietf.org/archive/id/draft-quinn-vxlan-gpe-03.txt
NSH - https://datatracker.ietf.org/doc/draft-quinn-sfc-nsh/
GRE - http://tools.ietf.org/html/rfc2890
NVGRE - https://datatracker.ietf.org/doc/draft-sridharan-virtualization-nvgre/
OVS supported tunnel protocols - GRE, VXLAN, IPsec, GRE and VXLAN over Ipsec
NSX vswitch - VXLAN, STT, GRE
Top message: Intel is committed to helping solve the key networking challenges highlighted by top IT professionals in the recent TechTarget survey.
Suggested Text:
This slide shows data from a recent TechTarget purchasing intentions survey that describes and also ties back to previous slide on the biggest challenges IT faces today.
More than half, or 56% of the 1,560 networking pros worldwide polled in the SearchNetworking study, identified Network Security as their main hurdle.
The need for more bandwidth ranked as the second-biggest challenge this year, with 44% of respondents citing it as one of their main obstacles.
Ensuring applications run optimally was at 29% which highlights the need to give application reliability and security.
The focus of this deck is to cover the following three areas in CCV model and tie back to the relevant solutions.
Improving Network Security
Addressing Need for Bandwidth
Application Delivery Optimization
[transition sentence]
We will start with the Security section in the next slide…
Ashok will talk about ONP. ONP is a server reference architecture that brings together key hardware and open software. These ingredients are optimized for network functions virtualization (NFV), and software-defined networking (SDN) in Telecom, Enterprise, and Cloud markets.