Fast Packet Processing in VNF Using DPDK and fd.io
Sujata Tibrewala, 07/06/2017
Networking Community Development Manager
Intel Developer Zone
@sujatatibre
sujata.tibrewala@intel.com
Legal Notices and Disclaimers

Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer.
No computer system can be absolutely secure.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
Intel, the Intel logo and others are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
© 2016 Intel Corporation.
AGENDA
• Why Virtualization
• A day in Life of a Network packet
• NFV Architecture and Ecosystem
• Intel’s role and Community Development
• Conclusions
Why Virtualization
AT&T Traffic Explosion*
https://www.youtube.com/watch?v=86mFVgttYBI
Facebook Data Centre Network*
*Data: wired.com (https://www.wired.com/2012/06/facebook-nc-data-center/)
5G Network Demands
• Latency
• Scalability
• Agility
• Performance

[Diagram: 5G platform building blocks – processor, DRAM, last level cache, crypto/compression, interconnect/switch, and soft switch / packet processing software optimizations]
Importance of Architectural Consistency

The market requires the same software running across both the physical and virtual appliance architectures in order to deliver a consistent, scalable solution.
vE-CPE Deployed at Various Locations

[Diagram: customer-site and network-edge virtualization – non-virtualized CPE at branches, physical and virtual appliances in the private enterprise cloud and private data center, vE-CPE at the NFVI-PoP, and centralized corporate IT infrastructure; non-virtualized north-south traffic vs. virtualized east-west traffic]

Inconsistent architecture ("different software architectures"):
1. Fragmented services with higher latency
2. Complex (costly) deployment/connectivity
3. More difficult to provision and scale services

Consistent architecture (same software running across physical and virtual architectures):
1. Low latency and highly scalable
2. Fast service response, consistent feature/functionality
3. Common management and provisioning framework, easy to operate
Intel is Investing to Lead the Transformation
• Drive an open ecosystem: Intel® Network Builders
• Collaborate with end users
• Deliver open reference architectures (Intel® Architecture, Linux, KVM)
• Advance open source and standards
• Intel technology leadership
*Other names and brands may be claimed as property of others
Is virtualization the answer?

Software Defined Networking / Network Function Virtualization Framework
Network Virtualization: Enables Multi-Tenancy

Virtual network abstraction (a "network hypervisor") using tunnel overlays, e.g. VXLAN, NVGRE, Geneve.

[Diagram: server virtualization (hypervisor, vSwitch, and VMs sharing CPU, memory, storage, and the NIC) alongside network virtualization (an SDN controller managing vSwitches that carry Virtual Networks 1..N as overlays on the physical IP network)]

• Abstract the physical network and overlay virtual networks on existing IP network infrastructure
• Virtual networks are dynamically created on demand; each tenant gets a separate virtual network (and virtual appliances)
• VMs can migrate across physical subnets/geographies
Network Function Virtualization and Service Chaining

Network services such as firewall, VPN, and load balancer move from fixed-function boxes to network functions virtualized on general-purpose servers (VNFs running over a hypervisor and vSwitch, sharing CPU, memory, and storage).

• Virtualize network functions as software appliances on commercial off-the-shelf servers
• Automate provisioning of L4–L7 network services in the data center
• The cloud provider offers the physical equivalent of network services to each tenant virtual network
Network Virtualization with Service Overlay

[Diagram: a service overlay (or service plane) running on top of the virtual overlay networks, which in turn run over the physical IP network under an SDN controller; virtual network abstraction uses tunnel overlays, e.g. VXLAN, NVGRE, Geneve]

• Service overlays are deployed over network virtualization
• Service chains are dynamically created on demand, within a tenant virtual network
• Network Services Header (NSH) and Geneve are examples of protocols that enable service overlays
A Day in the Life of a Network Packet
Packet Coming from the Hardware

Key elements for physical networking:
• Ethernet port on the server – commonly called the pNIC (physical NIC)
• RJ45 cable
• Ethernet port on the physical switch
• Uplink port on the physical switch – connects to the external network
Packet Processing in Virtual Networking

Key elements for virtual networking:
• Ethernet port on the VM
• Virtual RJ45 cable
• Ethernet port on the virtual switch
• Uplink port on the virtual switch

All of these elements need to be virtualized, either by:
• the hypervisor/operating system (KVM, Xen, Hyper-V, etc.), or
• hardware that recognizes virtual machines.
Telco Cloud – End-to-End Agility: Distributed, Local, Automated

• Services delivery: services management & orchestration over VNFs (EPC*, MEC*, vRAN*, IoT*), with an automated service level agreement, a unified management plane, and RESTful interfaces
• Modernize and virtualize: infrastructure orchestration software, 4:1 workload consolidation, Intel® VT + NFV-optimized platforms, and a resource pool of storage, network, and compute
• Automated infrastructure: infrastructure management and orchestration, optimized workload placement, and security policy and lifecycle automation, driven by infrastructure attributes (power, performance, security, thermals, utilization, location)

VNF – Virtual Network Function, EPC – Evolved Packet Core, MEC – Mobile Edge Computing, IoT – Internet of Things
[Diagram: the SDN/NFV framework stack on Intel architecture]
• Orchestration: OpenStack, CloudStack, OpenShift, Amazon EC2, Microsoft Azure, Google Compute Engine
• Control plane: OpenContrail, OpenDaylight, ONOS
• Infrastructure layer / data plane: L2, L3, and security VNF appliances, each with a virtual NIC (or SR-IOV virtual function) and DPDK, running over DPDK-accelerated virtual switches (fd.io, Lagopus, Open vSwitch, POF, OpenSwitch, BESS)
• Virtualisation technology: virtual machine monitor (VMM)/hypervisor – KVM, Xen, Hyper-V, QEMU
• Intel Architecture NFV/SDN accelerators: VT-d, SR-IOV, VMDq, QAT chipset acceleration, Hyperscan, RDT, IA CPU, and NIC silicon
Worldwide Server Market – Ethernet Port Speed Adoption Forecast

Each year lists the innovators/early adopters first, then the majority adopters.

Tier 1 Cloud DC (>1M servers):
• 2016: 10GbE→40GbE; 10GbE
• 2017: 40GbE→50GbE; 10GbE→25GbE
• 2018: 50GbE→100GbE; 25GbE→50GbE
• 2019: 100GbE; 50GbE→100GbE
• 2020: 100GbE+; 50GbE→100GbE

Tier 2/3 Cloud DC:
• 2016: 1GbE→10GbE; 1GbE→10GbE
• 2017: 10GbE→25GbE; 10GbE
• 2018: 25GbE→50GbE; 10GbE→25GbE
• 2019: 50GbE→100GbE; 25GbE→50GbE
• 2020: 50GbE→100GbE; 25GbE→50GbE

Enterprise / Premises:
• 2016: 1GbE→10GbE; 1GbE
• 2017: 10GbE→40GbE; 1GbE→10GbE
• 2018: 10GbE→40GbE/50GbE; 1GbE→10GbE
• 2019: 10GbE/40GbE→50GbE; 10GbE
• 2020: 50GbE; 10GbE

Source: Worldwide Server Market – Network Metrics, Dell'Oro Group, January 2017

Definitions:
• 1GbE: single to multiple port 1GbE
• 10GbE: single to multiple port 10GbE
• 40GbE: single 40GbE or quad-port 10GbE
• 25GbE: single-port 25GbE
• 50GbE: dual-port 25GbE or single 50GbE
• 100GbE: single 100GbE or quad-port 25GbE
Market Dynamics in 2017

10GbE: continued growth in '17
• 2016: 13.7M ports
• 2017: 16.7M ports
• Seeing demand for 4x10GbE
• SFP+ and 10GBASE-T

25/50GbE: starting to ramp in '17
• 2016: 25GbE 240K ports, 50GbE 100K ports
• 2017: 25GbE 1M ports, 50GbE 800K ports

40GbE: shifting in '17
• Tier 1 cloud moves to 25GbE+
• Rest of the market continues to grow
• 2016: 2M ports
• 2017: 1.5M ports

Sources: Dell'Oro Group, November 2016
Intel® Ethernet Adapters in Market

700 Series:
• XL710 40GbE QSFP+ – network virtualization overlays acceleration
• XXV710 25GbE SFP28 – cloud and network virtualization overlays
• X710 10GbE SFP+ – cloud and network virtualization overlays
• X710 10GBASE-T – quad-port 10GBASE-T

500 Series:
• X520 10GbE SFP+ – world's best-selling 10GbE CNA
• X540 10GBASE-T – world's first single-chip 10GBASE-T
• X550 10GBASE-T – 2nd-generation single-chip 10GBASE-T
Packet Processing: Kernel vs. User Space

Kernel-space driver path: the NIC DMAs packet data into socket buffers (skb's) in RAM via descriptor rings; interrupts signal packet arrival; configuration happens through CSRs; the kernel network stack processes the packet, and the data is then copied from kernel space to user space, where applications receive it through system calls.

User-space driver with zero copy (DPDK): a small UIO kernel driver maps the device configuration and descriptors from the kernel into user space; the DPDK poll-mode driver (PMD) polls the descriptor rings directly, and the NIC DMAs packet data straight into user-space buffers (mbuf's) consumed by the DPDK-enabled application.

• Benefit #1: removes the data copy from kernel to user space
• Benefit #2: no interrupts
• Benefit #3: the network stack can be streamlined and optimized
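To make the contrast concrete, here is a minimal sketch of a DPDK poll-mode receive loop. It assumes port 0 is already bound to a DPDK-compatible driver and has been configured and started elsewhere; setup and error handling are trimmed:

```c
/* Minimal sketch of a DPDK poll-mode receive loop -- assumes port 0 is
 * bound to a DPDK driver and was configured/started before this runs. */
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return EXIT_FAILURE;            /* EAL init failed */

    const uint16_t port_id = 0;         /* assumed: first DPDK port */
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        /* Poll the NIC: no interrupts, no kernel crossing, no copy --
         * packets were DMA'd straight into user-space mbufs. */
        uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);  /* ...process, then free... */
    }
    return 0;
}
```

The loop never sleeps and never takes an interrupt; in practice one such loop is pinned to each forwarding core.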
Benefits – Eliminating / Hiding Overheads

Overhead → how it is eliminated or hidden:
• Interrupt and context-switch overhead → polling
• Kernel/user-space crossing overhead → user-mode driver
• Core-to-thread scheduling overhead → pthread affinity
• 4K paging overhead → huge pages
• PCI bridge I/O overhead → high-throughput bulk-mode I/O calls
• Inter-core synchronization → lockless inter-core communication
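The "lockless inter-core communication" item refers to DPDK's ring library. A hedged sketch of a single-producer/single-consumer ring handing work between two pinned cores (the ring name, size, and helpers here are illustrative, and EAL initialization is assumed to have happened already):

```c
/* Sketch: lockless inter-core hand-off with DPDK's ring library.
 * SP/SC flags mean exactly one producer and one consumer core, so
 * enqueue and dequeue need no locks at all. */
#include <stdio.h>
#include <rte_ring.h>
#include <rte_lcore.h>

static struct rte_ring *work_ring;

static void producer(void *item)
{
    /* Runs on the producer core. */
    if (rte_ring_enqueue(work_ring, item) != 0)
        ; /* ring full: drop or retry */
}

static void consumer(void)
{
    void *item;
    /* Runs on the consumer core. */
    if (rte_ring_dequeue(work_ring, &item) == 0)
        printf("got item %p\n", item);
}

int main(void)
{
    /* rte_eal_init() assumed to have run before this point. */
    work_ring = rte_ring_create("work_ring", 1024, rte_socket_id(),
                                RING_F_SP_ENQ | RING_F_SC_DEQ);
    if (work_ring == NULL)
        return 1;
    producer((void *)0x1);
    consumer();
    return 0;
}
```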
To tackle this challenge, what kinds of devices and latencies do we have at our disposal?

Challenge: what if there is an L1 cache miss but an LLC hit?

[Diagram: per-core L1 and L2 caches in front of the shared last level cache (LLC); an LLC hit costs about 40 cycles]

With a 40-cycle LLC hit, how will you achieve an Rx budget of 19 cycles per packet?
Packet I/O Solution – Amortizing Over Multiple Descriptors

• Examine a bunch of descriptors at a time: read 8 packet descriptors in one go, so the ~40 ns LLC-hit latency gets amortized over 8 descriptors
• That roughly gets you back to the latency of an L1 cache hit per packet
• Similarly for packet I/O: go for burst reads
Packet Processing (software.intel.com/networking)

Typical L2 switching stages and the fields they key on:
• Packet sanity → CRC calculation
• VLAN tag present? → TPID
• Ingress/egress VLAN filtering → MAC + port + VLAN membership
• Source MAC learning → MAC + port
• Destination MAC learning → MAC + port
• QoS → PCP
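As a small illustration of the "VLAN tag present?" stage, here is a plain-C sketch that inspects the EtherType/TPID field of an Ethernet header (struct and helper names are illustrative):

```c
/* Sketch: detecting a VLAN tag by checking the TPID/EtherType field. */
#include <stdint.h>
#include <arpa/inet.h>   /* ntohs */

struct eth_hdr {
    uint8_t  dst[6];
    uint8_t  src[6];
    uint16_t ether_type;  /* equals the TPID when a VLAN tag follows */
} __attribute__((packed));

static int has_vlan_tag(const struct eth_hdr *h)
{
    /* 0x8100 is the IEEE 802.1Q TPID; QinQ outer tags use 0x88A8. */
    uint16_t t = ntohs(h->ether_type);
    return t == 0x8100 || t == 0x88A8;
}
```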
Packet Processing: TCP/IP
[Diagram of TCP/IP packet-processing stages – software.intel.com/networking]

NFV Packet Processing Explosion
[Diagram – software.intel.com/networking]
NVO – Key Data-Plane Encapsulation Protocols

Encapsulation Protocol | Advocate | Description
GRE (Generic Routing Encapsulation) | Cisco* | IP + GRE; inner payload: Ethernet/IPv4/IPv6/NSH
STT (Stateless Transport Tunneling) | Nicira* | IP + TCP(-like) + STT; inner payload: Ethernet only
VXLAN (Virtual Extensible LAN) | VMware*, Cisco* | IP + UDP + VXLAN; inner payload: Ethernet only
NVGRE (Network Virtualization using GRE) | Microsoft* | IP + modified GRE; inner payload: Ethernet only
Geneve (Generic Network Virtualization Encapsulation) | VMware*/Nicira* | IP + UDP + Geneve; inner payload: Ethernet/IPv4/IPv6
VXLAN-GPE (Generic Protocol Extension for VXLAN) | Cisco* | IP + UDP + VXLAN-GPE; inner payload: Ethernet/IPv4/IPv6/NSH
NSH (Network Service Header) | Cisco* | Requires a transport protocol; inner payload: Ethernet/IPv4/IPv6

[Diagram: server virtualization (hypervisor, virtual switch, physical hardware) and network virtualization (Open Virtual Switches carrying Virtual Networks 1–3 over the physical IP network, managed by a network virtualization controller, e.g. VMware* NSX); virtual network abstraction uses tunnel overlays such as VXLAN, Geneve, and NVGRE]
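For a sense of what these encapsulations add on the wire, here is a sketch of the VXLAN header as defined in RFC 7348: eight bytes carried after the outer UDP header, where a 24-bit VNI identifies the tenant virtual network (the struct and helper names are illustrative, not from any particular library):

```c
/* Sketch: VXLAN header layout per RFC 7348. */
#include <stdint.h>

struct vxlan_hdr {
    uint8_t  flags;        /* 0x08 set => the VNI field is valid */
    uint8_t  reserved1[3];
    uint8_t  vni[3];       /* 24-bit VXLAN Network Identifier */
    uint8_t  reserved2;
} __attribute__((packed));

static uint32_t vxlan_vni(const struct vxlan_hdr *h)
{
    /* Assemble the 24-bit, network-byte-order VNI. */
    return ((uint32_t)h->vni[0] << 16) |
           ((uint32_t)h->vni[1] << 8)  |
            (uint32_t)h->vni[2];
}
```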
Packets per Second (software.intel.com/networking)

Frame Part | Minimum Frame Size | Maximum Frame Size
Inter-Frame Gap | 12 bytes | 12 bytes
MAC Preamble (+ SFD) | 8 bytes | 8 bytes
MAC Destination Address | 6 bytes | 6 bytes
MAC Source Address | 6 bytes | 6 bytes
MAC Type (or Length) | 2 bytes | 2 bytes
Payload (Network PDU) | 46 bytes | 1,500 bytes
Frame Check Sequence (CRC) | 4 bytes | 4 bytes
Total Frame Physical Size | 84 bytes | 1,538 bytes

Table 1. Maximum frame rate and throughput calculations for a 1 Gb/s Ethernet link:
1,000,000,000 b/s / (84 B × 8 b/B) = 1,488,096 f/s (maximum rate)
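The same arithmetic in a few lines of C, for both ends of the frame-size range (this simply re-derives the table's numbers):

```c
/* Sketch: Ethernet line-rate math. 84 bytes on the wire is the minimum
 * 64-byte frame plus preamble/SFD (8 B) and inter-frame gap (12 B). */
#include <stdio.h>

int main(void)
{
    const double link_bps = 1e9;        /* 1 Gb/s */
    const double min_wire_bytes = 84.0;
    const double max_wire_bytes = 1538.0;

    printf("max rate, 64B frames  : %.0f pps\n", link_bps / (min_wire_bytes * 8));
    printf("min rate, 1500B frames: %.0f pps\n", link_bps / (max_wire_bytes * 8));
    return 0;
}
```

At 10, 40, or 100 Gb/s the rates scale linearly, which is where the per-packet cycle budgets on the next slide come from.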
On Intel® Architecture

At 256B packets, an 18-core CPU running at 2 GHz can satisfy 100 GbE throughput as long as we stay within 751 cycles/packet.
• At 512B, the budget is 1,447 cycles.

If we run at an instructions-per-clock (IPC) of ~2:
• 256B → 1,502 instructions
• 512B → ~2,894 instructions

If the IPC is 2.5:
• 256B → 1,877 instructions
• 512B → 3,617 instructions

Disclaimer: Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance.
Open vSwitch with DPDK – Performance

[Performance chart omitted.]

Disclaimer: Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Test configurations: E5-2658 (2.1 GHz, 8C/16T) DP; PCH: Patsburg; LLC: 20MB; 16 x 10GbE Gen2 x8; 4 memory channels per socket @ 1600 MT/s, 8 memory channels total; DPDK 1.3.0-154. E5-2658 v2 (2.4 GHz, 10C/20T) DP; PCH: Patsburg; LLC: 20MB; 22 x 10GbE Gen2 x8; 4 memory channels per socket @ 1867 MT/s, 8 memory channels total; DPDK 1.4.0-22. *Projection data on 2 sockets extrapolated from 1S run on Wildcat Pass system with E5-2699 v3.
DPDK Generational Performance Gains

IPv4 L3 forwarding performance of 64-byte packets:

Year | Platform | L3 Fwd (Mpps) | Throughput (Gbps)
2010 | 2S WMR | 55 | 37
2011 | 1S SNB | 80.1 | 53.8
2012 | 2S SNB | 164.9 | 110.8
2013 | 2S IVB | 255 | 171.4
2014 | 2S HSW | 279.9 | 187.2
2015 | 2S BDW | 346.7 | 233

Broadwell EP system configuration:
• Hardware – Platform: SuperMicro* X10DRX; CPU: Intel® Xeon® Processor E5-2658 v4; Chipset: Intel® C612; Sockets: 2; Cores per socket: 14 (28 threads); LL cache: 30 MB; QPI/DMI: 9.6 GT/s; PCIe: Gen3 x8; Memory: DDR4 2400 MHz, 1Rx4 8GB (64GB total), 4 channels per socket; NIC: 10 x Intel® Ethernet CNA XL710-QDA2, PCI-Express Gen3 x8 dual-port 40 GbE (1x40G per card); BIOS: 1.0c (02/12/2015)
• Software – OS: Debian 8.0; kernel: 3.18.2; DPDK 2.2.0

Disclaimer: Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance.
* Other names and brands may be claimed as the property of others.
Software Router Examples

• "Now we get 10G line rates per core with 64-byte packets and linear performance as we add cores," said Herrell. "Systems shipping today deliver almost 200G throughput for a two-socket server, and in the routing/firewall world that is shocking because it replaces $100,000 proprietary boxes." – Brocade Vyatta
• Sandvine, Dell®, and Intel®, using standards-based virtualization technologies, have achieved data plane performance at scale: 1.1 Tbps (using realistic traffic, including diverse protocols, encapsulation, and tunneling)
• Achieving 100 Gbps performance at the core with Poptrie and Kamuee Zero: NTT Communications
Intel® RDT: CMT & CAT

Cache Monitoring Technology (CMT)
• Identify "noisy neighbors" and misbehaving or cache-starved applications, and reschedule according to priority
• Cache occupancy is reported on a per Resource Monitoring ID (RMID) basis

Cache Allocation Technology (CAT)
• A last level cache partitioning mechanism enabling the separation of applications, threads, VMs, etc.
• Misbehaving threads can be isolated to increase performance determinism

[Diagram: cores, the hypervisor, vSwitch, and apps sharing the last level cache, monitored and partitioned by RDT]

Cache monitoring and allocation technologies improve cache visibility and run-time determinism.
Top Networking Challenges Seen in IT
(Source: TechTarget's 2015 purchasing intentions survey)

• 56% – Improving network security
• 44% – Need more bandwidth
• 34% – Network virtualization
• 33% – Aligning IT and corporate goals
• 30% – Moving applications to the cloud
• 29% – Ensuring applications run optimally
• 27% – BYOD-related access and policy concerns

Intel is committed to helping solve these critical challenges.
DPDK Acceleration Enhancements

The DPDK framework exposes a single DPDK API to many consumers: traffic generators (Pktgen, T-Rex, MoonGen, ...), vSwitches (OVS, Lagopus, ...), user-space network stacks (libUNS, mTCP, SeaStar, libuinet, TLDK, ...), VNF apps, video apps, proxy apps, and the DPDK example apps, with threading models (lthreads, ...) and event-based programming models among the future features.

Inside the framework:
• Core libraries: EAL, MALLOC, MBUF, MEMPOOL, RING, TIMER
• Platform: KNI, POWER, IVSHMEM
• Classification: ACL, LPM
• QoS: METER, SCHED; plus PIPELINE
• Utilities: HASH, IP Frag, CMDLINE, JOBSTAT, KVARGS, REORDER, TABLE
• Packet access (PMDs), behind the ETHDEV abstraction: e1000, ixgbe, i40e, fm10k, bonding, af_pkt, xenvirt, enic, ring, cxgbe, vmxnet3, virtio, mlx4, memnic, and others
• Hardware acceleration: AES-NI crypto, Hyperscan DPI, IPSec, and future accelerators – programmable classifier/parser, compression, 3rd-party GPU/FPGA and SoC hardware via SoC PMDs and an external mempool manager (the SoC model)
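To show how two of the core libraries (MEMPOOL and MBUF) fit together: every PMD receive queue draws its packet buffers from a pre-allocated mbuf pool. A minimal sketch, assuming EAL initialization has already run (pool name and sizes are illustrative):

```c
/* Sketch: creating the packet-buffer pool that rx queues draw from. */
#include <rte_mbuf.h>
#include <rte_lcore.h>

static struct rte_mempool *make_pool(void)
{
    /* Assumes rte_eal_init() has already run. */
    return rte_pktmbuf_pool_create(
        "mbuf_pool",                 /* pool name (illustrative) */
        8192,                        /* total mbufs in the pool */
        256,                         /* per-core cache size */
        0,                           /* app-private area per mbuf */
        RTE_MBUF_DEFAULT_BUF_SIZE,   /* data room: headroom + packet */
        rte_socket_id());            /* allocate on the local NUMA node */
}
```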
DPDK Path to Communities, Vendors, and Service Providers

• Fully open source (BSD-licensed) software project with a strong development community and major contributors: http://dpdk.org
• DPDK is available as part of several OS distributions (version thresholds shown per distribution logo in the original slide: 6+, 7.1+, 7.1 and higher, 15.10+, 10.1+, 22+)
• Open source projects based on DPDK: mTCP, Seastar, Pktgen, Netflow, and many more (see backup)
• Intel® ONP, end customers, vendors, and Intel® Network Builders foster a vibrant ecosystem to lead the network transformation of tomorrow: https://networkbuilders.intel.com/solutionscatalog
FD.io members
OPNFV Colorado 3.0 Release

Intel's Role in Community Development
Scale Program / Community Development Collaboration

Enablement framework stages: joint path-finding/discovery → optimizations → PoC/trials/deployments. (Partner logos sit at their approximate position on this framework in the original slide.)

• TOTAL: 117 – partners influenced to optimize their solutions on Intel CPUs and other ingredients this year
• TOTAL: 72 – development/selection of end-to-end NFV solution use cases with multiple partners
• TOTAL: 33, Deployed: 22

How each stage is supported:
• Meet-ups bring in new partners and let them discover SDN/NFV ingredients on IA
• Hands-on training, IDZ collateral, and IEMs help support optimization
• Continued help using hands-on training, IDZ collateral, and IEMs supports individual deployments
Model 2017 to Help Fast-Track ISV Solutions

Continue to win developer mindshare and increase adoption of IA in the NFV/SDN space by deepening engagements via:
• Live training
• Active innovators
• DevMesh projects

Technologies covered: DPDK, QuickAssist, OpenStack, VTune, SDN/NFV Forum, Open vSwitch, RDT, Fd.io, SR-IOV, VMDq, VT-d.
Channels: Intel Innovator, DevMesh, live training, ISV support, IDZ.

• 1300+ members
• 2000+ developers trained worldwide, from 146 companies
• 8 active innovators
• 12 DevMesh projects
• 8000+ organic users every month
Your Role in the Community
Wipro Case Study

• Visa payment system PoC
• Wipro: system integrator (40 SMEs)
• An in-memory database called Aerospike (9-node cluster)
• Cisco UCS 460, which has 96 Intel cores for the horsepower, with DPDK to manage (5-node cluster)
• Non-volatile memory and flash drives
• Requirement: network scalability from 10G to 40G; ability to process messages at 13 ms low latency with a high throughput of 15,000 credit card transactions per second
• Project status:
  • Started in July, with developers trained via the DPDK/NFV dev lab on July 11th
  • Scheduled to deploy for evaluation by Visa by end of October 2016
  • Scale of deployment: worldwide

Scale community engagement: developers trained in the DPDK/NFV hands-on on July 11th; continued participation in meet-ups; IDZ collateral for BKMs; email support from the Scale team.

Training feedback from the Wipro program manager:
[Ashish] The highlight of the training was the hands-on approach to get familiar with DPDK. This is critical for the success of such initiatives, and I understand that this requires a lot of planning (infrastructure setup). I hope that Intel continues to organize such trainings to educate developers.
[Ashish] The community and experts have been helpful, and we would reach out for more discussions.
What are Networking Innovators doing right now?

Dharani Vilwanathan (Dev Lab winner)
Project: PerfectStream – a DPDK-based video gateway
About the project: PerfectStream is primarily a video gateway that receives multiple streams, then stores and/or relays the feed as needed, in the way the client prefers.
Shivaram Mysore
Project: Deploying an SDN wired/wireless network
About the project: Faucet enables replacement of a legacy L2/L3 network switch with SDN functionality. Here OVS + DPDK on an Intel x86 white box is used as the data plane (switch), with the Faucet controller managing it.
Sridhar Pitchai
Project: DPDK datapath for control plane traffic
Objective: implement a DPDK-based data path to bypass the kernel IP stack for packets punted to the CPU from a vendor-chip-based fast path.

The FlexSwitch NOS is currently running in some of the world's most demanding networks with the same architectural model that has been proven by Facebook, Amazon, and others.
Conclusions

Network Function Virtualization and Software Defined Networking promise to transform, and are already transforming, the industry by moving network functions from fixed-function ASICs to commodity hardware.
• The answer to scalable, performant virtualization is to use software for agility and hardware offloads for well-defined workloads
• Intel is working with the ecosystem to define the best solutions, in both software and hardware, to enable that ecosystem
• DPDK is an example of a successful open source project that helps the industry implement packet processing on x86-based platforms
• We are here to help you in your work and would like to help you
Thank you

sujata.tibrewala@intel.com
@sujatatibre
Meet-up Partnerships, Bay Area (total developer reach ~10,000+)

• Partner – Members: 93; Geo: San Jose – http://www.meetup.com/sbysdnnfvcloud/
• Partner – Members: 809; Geo: Santa Clara – http://www.meetup.com/SDN-Switching-Group/
• Intel Developer Zone meet-up – Members: 1300+; Geo: Santa Clara – http://www.meetup.com/Out-Of-The-Box-Network-Developers
• Partner – Members: 5739; Geo: San Francisco – http://www.meetup.com/openstack/
• Partner – Members: 2851; Geo: Santa Clara – http://www.meetup.com/openvswitch
IDZ Meet-ups Worldwide

• Partner – Members: 645; Geo: Dublin – https://www.meetup.com/OpenStack-Ireland/
• Members: 264; Geo: Bangalore – https://www.meetup.com/SDN-NFV-Meetup/
• Members: 67; Geo: Portland – https://www.meetup.com/Out-Of-The-Box-Network-Developers-PDX/
External Developer Events

• Meet-ups
• TCS SDN/NFV one-day event, Sep 2016
• Women Who Code / VMware: DPDK and Open vSwitch hands-on, Sep 2016
• DPDK Summit Bangalore, April 2017

Más contenido relacionado

La actualidad más candente

LF_DPDK17_ OpenVswitch hardware offload over DPDK
LF_DPDK17_ OpenVswitch hardware offload over DPDKLF_DPDK17_ OpenVswitch hardware offload over DPDK
LF_DPDK17_ OpenVswitch hardware offload over DPDK
LF_DPDK
 

La actualidad más candente (20)

FD.io Vector Packet Processing (VPP)
FD.io Vector Packet Processing (VPP)FD.io Vector Packet Processing (VPP)
FD.io Vector Packet Processing (VPP)
 
DPDK Summit 2015 - Intel - Keith Wiles
DPDK Summit 2015 - Intel - Keith WilesDPDK Summit 2015 - Intel - Keith Wiles
DPDK Summit 2015 - Intel - Keith Wiles
 
DPDK & Layer 4 Packet Processing
DPDK & Layer 4 Packet ProcessingDPDK & Layer 4 Packet Processing
DPDK & Layer 4 Packet Processing
 
DPDK Summit - 08 Sept 2014 - NTT - High Performance vSwitch
DPDK Summit - 08 Sept 2014 - NTT - High Performance vSwitchDPDK Summit - 08 Sept 2014 - NTT - High Performance vSwitch
DPDK Summit - 08 Sept 2014 - NTT - High Performance vSwitch
 
DPDK Summit - 08 Sept 2014 - Futurewei - Jun Xu - Revisit the IP Stack in Lin...
DPDK Summit - 08 Sept 2014 - Futurewei - Jun Xu - Revisit the IP Stack in Lin...DPDK Summit - 08 Sept 2014 - Futurewei - Jun Xu - Revisit the IP Stack in Lin...
DPDK Summit - 08 Sept 2014 - Futurewei - Jun Xu - Revisit the IP Stack in Lin...
 
Scaling the Container Dataplane
Scaling the Container Dataplane Scaling the Container Dataplane
Scaling the Container Dataplane
 
Dpdk Validation - Liu, Yong
Dpdk Validation - Liu, YongDpdk Validation - Liu, Yong
Dpdk Validation - Liu, Yong
 
DPDK Summit - 08 Sept 2014 - Intel - Networking Workloads on Intel Architecture
DPDK Summit - 08 Sept 2014 - Intel - Networking Workloads on Intel ArchitectureDPDK Summit - 08 Sept 2014 - Intel - Networking Workloads on Intel Architecture
DPDK Summit - 08 Sept 2014 - Intel - Networking Workloads on Intel Architecture
 
DPDK Summit 2015 - RIFT.io - Tim Mortsolf
DPDK Summit 2015 - RIFT.io - Tim MortsolfDPDK Summit 2015 - RIFT.io - Tim Mortsolf
DPDK Summit 2015 - RIFT.io - Tim Mortsolf
 
DPDK Summit 2015 - Sprint - Arun Rajagopal
DPDK Summit 2015 - Sprint - Arun RajagopalDPDK Summit 2015 - Sprint - Arun Rajagopal
DPDK Summit 2015 - Sprint - Arun Rajagopal
 
Quieting noisy neighbor with Intel® Resource Director Technology
Quieting noisy neighbor with Intel® Resource Director TechnologyQuieting noisy neighbor with Intel® Resource Director Technology
Quieting noisy neighbor with Intel® Resource Director Technology
 
1 intro to_dpdk_and_hw
1 intro to_dpdk_and_hw1 intro to_dpdk_and_hw
1 intro to_dpdk_and_hw
 
Intel® RDT Hands-on Lab
Intel® RDT Hands-on LabIntel® RDT Hands-on Lab
Intel® RDT Hands-on Lab
 
DPDK Summit 2015 - HP - Al Sanders
DPDK Summit 2015 - HP - Al SandersDPDK Summit 2015 - HP - Al Sanders
DPDK Summit 2015 - HP - Al Sanders
 
LF_DPDK17_ OpenVswitch hardware offload over DPDK
LF_DPDK17_ OpenVswitch hardware offload over DPDKLF_DPDK17_ OpenVswitch hardware offload over DPDK
LF_DPDK17_ OpenVswitch hardware offload over DPDK
 
TLDK - FD.io Sept 2016
TLDK - FD.io Sept 2016 TLDK - FD.io Sept 2016
TLDK - FD.io Sept 2016
 
Intel® Ethernet Update
Intel® Ethernet Update Intel® Ethernet Update
Intel® Ethernet Update
 
DPDK IPSec Security Gateway Application
DPDK IPSec Security Gateway ApplicationDPDK IPSec Security Gateway Application
DPDK IPSec Security Gateway Application
 
Software Network Data Plane - Satisfying the need for speed - FD.io - VPP and...
Software Network Data Plane - Satisfying the need for speed - FD.io - VPP and...Software Network Data Plane - Satisfying the need for speed - FD.io - VPP and...
Software Network Data Plane - Satisfying the need for speed - FD.io - VPP and...
 
DPDK summit 2015: It's kind of fun to do the impossible with DPDK
DPDK summit 2015: It's kind of fun  to do the impossible with DPDKDPDK summit 2015: It's kind of fun  to do the impossible with DPDK
DPDK summit 2015: It's kind of fun to do the impossible with DPDK
 

Similar a Netsft2017 day in_life_of_nfv

Banv meetup-contrail
Banv meetup-contrailBanv meetup-contrail
Banv meetup-contrail
nvirters
 
SDN & NFV Introduction - Open Source Data Center Networking
SDN & NFV Introduction - Open Source Data Center NetworkingSDN & NFV Introduction - Open Source Data Center Networking
SDN & NFV Introduction - Open Source Data Center Networking
Thomas Graf
 

Similar a Netsft2017 day in_life_of_nfv (20)

Building the SD-Branch using uCPE
Building the SD-Branch using uCPEBuilding the SD-Branch using uCPE
Building the SD-Branch using uCPE
 
Enabling NFV features in kubernetes
Enabling NFV features in kubernetesEnabling NFV features in kubernetes
Enabling NFV features in kubernetes
 
NFV and SDN: 4G LTE and 5G Wireless Networks on Intel(r) Architecture
NFV and SDN: 4G LTE and 5G Wireless Networks on Intel(r) ArchitectureNFV and SDN: 4G LTE and 5G Wireless Networks on Intel(r) Architecture
NFV and SDN: 4G LTE and 5G Wireless Networks on Intel(r) Architecture
 
G rpc talk with intel (3)
G rpc talk with intel (3)G rpc talk with intel (3)
G rpc talk with intel (3)
 
 Network Innovations Driving Business Transformation
 Network Innovations Driving Business Transformation Network Innovations Driving Business Transformation
 Network Innovations Driving Business Transformation
 
Banv meetup-contrail
Banv meetup-contrailBanv meetup-contrail
Banv meetup-contrail
 
Network Function Virtualization (NFV) BoF
Network Function Virtualization (NFV) BoFNetwork Function Virtualization (NFV) BoF
Network Function Virtualization (NFV) BoF
 
Virtual firewall framework
Virtual firewall frameworkVirtual firewall framework
Virtual firewall framework
 
Framework for the New IP - Phil O'Reilly
Framework for the New IP - Phil O'ReillyFramework for the New IP - Phil O'Reilly
Framework for the New IP - Phil O'Reilly
 
Turbocharge the NFV Data Plane in the SDN Era - a Radisys presentation
Turbocharge the NFV Data Plane in the SDN Era - a Radisys presentationTurbocharge the NFV Data Plane in the SDN Era - a Radisys presentation
Turbocharge the NFV Data Plane in the SDN Era - a Radisys presentation
 
Contrail Enabler for agile cloud services
Contrail Enabler for agile cloud servicesContrail Enabler for agile cloud services
Contrail Enabler for agile cloud services
 
Hyper-V Networking
Hyper-V NetworkingHyper-V Networking
Hyper-V Networking
 
[OpenStack Day in Korea 2015] Track 2-3 - 오픈스택 클라우드에 최적화된 네트워크 가상화 '누아지(Nuage)'
[OpenStack Day in Korea 2015] Track 2-3 - 오픈스택 클라우드에 최적화된 네트워크 가상화 '누아지(Nuage)'[OpenStack Day in Korea 2015] Track 2-3 - 오픈스택 클라우드에 최적화된 네트워크 가상화 '누아지(Nuage)'
[OpenStack Day in Korea 2015] Track 2-3 - 오픈스택 클라우드에 최적화된 네트워크 가상화 '누아지(Nuage)'
 
SDN & NFV Introduction - Open Source Data Center Networking
SDN & NFV Introduction - Open Source Data Center NetworkingSDN & NFV Introduction - Open Source Data Center Networking
SDN & NFV Introduction - Open Source Data Center Networking
 
Network Virtualization & Software-defined Networking
Network Virtualization & Software-defined NetworkingNetwork Virtualization & Software-defined Networking
Network Virtualization & Software-defined Networking
 
Mini-Track: Lessons from Public Cloud
Mini-Track: Lessons from Public CloudMini-Track: Lessons from Public Cloud
Mini-Track: Lessons from Public Cloud
 
5G Multi-Access Edge Compute
5G Multi-Access Edge Compute5G Multi-Access Edge Compute
5G Multi-Access Edge Compute
 
Approaching hyperconvergedopenstack
Approaching hyperconvergedopenstackApproaching hyperconvergedopenstack
Approaching hyperconvergedopenstack
 
5G Core Network - ZTE 5g Cloude ServCore
5G Core Network - ZTE 5g Cloude ServCore5G Core Network - ZTE 5g Cloude ServCore
5G Core Network - ZTE 5g Cloude ServCore
 
Colt SD-WAN experience learnings and future plans
Colt SD-WAN experience learnings and future plansColt SD-WAN experience learnings and future plans
Colt SD-WAN experience learnings and future plans
 

Último

DeepFakes presentation : brief idea of DeepFakes
DeepFakes presentation : brief idea of DeepFakesDeepFakes presentation : brief idea of DeepFakes
DeepFakes presentation : brief idea of DeepFakes
MayuraD1
 
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak HamilCara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Kandungan 087776558899
 
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills KuwaitKuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
jaanualu31
 
Hospital management system project report.pdf
Hospital management system project report.pdfHospital management system project report.pdf
Hospital management system project report.pdf
Kamal Acharya
 

Último (20)

DeepFakes presentation : brief idea of DeepFakes
DeepFakes presentation : brief idea of DeepFakesDeepFakes presentation : brief idea of DeepFakes
DeepFakes presentation : brief idea of DeepFakes
 
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
 
Introduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaIntroduction to Serverless with AWS Lambda
Introduction to Serverless with AWS Lambda
 
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak HamilCara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
 
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills KuwaitKuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
Kuwait City MTP kit ((+919101817206)) Buy Abortion Pills Kuwait
 
PE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and propertiesPE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and properties
 
Unleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapUnleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leap
 
AIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech studentsAIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech students
 
Hospital management system project report.pdf
Hospital management system project report.pdfHospital management system project report.pdf
Hospital management system project report.pdf
 
Computer Networks Basics of Network Devices
Computer Networks  Basics of Network DevicesComputer Networks  Basics of Network Devices
Computer Networks Basics of Network Devices
 
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best ServiceTamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
Tamil Call Girls Bhayandar WhatsApp +91-9930687706, Best Service
 
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLE
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLEGEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLE
GEAR TRAIN- BASIC CONCEPTS AND WORKING PRINCIPLE
 
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptx
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptxOrlando’s Arnold Palmer Hospital Layout Strategy-1.pptx
Orlando’s Arnold Palmer Hospital Layout Strategy-1.pptx
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . ppt
 
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced LoadsFEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
 
Hostel management system project report..pdf
Hostel management system project report..pdfHostel management system project report..pdf
Hostel management system project report..pdf
 
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
COST-EFFETIVE  and Energy Efficient BUILDINGS ptxCOST-EFFETIVE  and Energy Efficient BUILDINGS ptx
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
 
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
 
A Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna MunicipalityA Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna Municipality
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.ppt
 

Netsft2017 day in_life_of_nfv

  • 1. FastPacketprocessinginVNFusingDPDKandfd.io Sujata Tibrewala, 07/06/2017 Networking Community Development Manager Intel Developer Zone @sujatatibre sujata.tibrewala@intel.com
  • 2. LegalNoticesandDisclaimersIntel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at intel.com, or from the OEM or retailer. No computer system can be absolutely secure. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance. Intel, the Intel logo and others are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others. © 2016 Intel Corporation.
  • 3. AGENDA • Why Virtualization • A day in Life of a Network packet • NFV Architecture and Ecosystem • Intel’s role and Community Development • Conclusions 3
  • 5. 5 AT&T Traffic Explosion * https://www.youtube.com/watch?v=86mFVgttYBI
  • 6. Facebook Data Centre Network * Data *wired.com (https://www.wired.com/2012/06/facebook-nc-data-center/)
  • 7. 5G Network demands • Latency • Scalability • Agility • Performance 5G Latency Scalability Agility Performance Interconnect / Switch Processor Crypto / Compression DRAM Last Level Cache Soft switch, Packet Processing SW Optimizations Interconnect / Switch
  • 8. Importance of Architectural Consistency Market requires the same software running across both the physical and virtual appliance architectures in order to deliver a consistent, scalable solution. 8 Private data center Private Enterprise Cloud Physical Appliance Non-virtualized CPE Physical Appliance Virtual Appliance Virtual Appliance vE-CPE deployed at various locations Customer Site Virtualization Network Edge Virtualization Non-Virtualized CPEBranch Branch Branch Branch vE-CPE vE-CPE vE-CPE Non-Virtualized North-South Traffic Virtualized East-West Traffic Fragmented services with higher latency Complex (costly) deployment/ connectivity More difficult to provision and scale services Inconsistent Architecture (“different software architectures”) 1 2 3 Cloud Network Common mgmt. and provisioning framework, easy to operate Fast service response, consistent feature/functionality Consistent Architecture (same software running across physical and virtual architectures) 3 1 Low latency and highly scalable 2 vE-CPE NFVI- PoP Centralized Corporate IT Infrastructure
  • 9. Intel is Investing to Lead the Transformation 9 DRIVEANOPENECOSYSTEM INTEL®NETWORKBUILDERS COLLABORATEWITH ENDUSERS DELIVEROPEN REFERENCEARCHITECTURES Intel® Architecture Linux KVM ADVANCEOPENSOURCE ANDSTANDARDS INTELTECHNOLOGY LEADERSHIP *Other names and brands may be claimed as property of others
  • 11. Software Defined Networks/Network Function Virtualization Framework
  • 12. NetworkPlatformsGroup Network Virtualization: Enables Multi-Tenancy Hypervisor NIC VM VM VM vSwitch VM MemoryCPU Storage Server Virtualization Network Virtualization Virtual Network abstraction (Network Hypervisor) using tunnel overlays e.g. VXLAN, NVGRE, Geneve Physical IP Network Virtual Overlay Networks vSwitchvSwitch vSwitch SDN Controller VM VM Virtual Network 1 VM VM Virtual Network 2 VM VM Virtual Network N VM VM VM VM VM VM VMVM VM • Abstract physical network and overlay virtual networks over existing IP network infrastructure • Dynamically created on demand, each tenant gets separate virtual network (and virtual appliances) • VM migration across physical subnets/geographies 12 Latency Scalability Agility Performance
  • 13. NetworkPlatformsGroup Firewall Load BalancerVPN Network Services Fixed function boxes Network Functions Virtualized on General purpose servers MemoryCPU Storage VNF Hypervisor VNFVNF VM VM VM vSwitch VM VM Virtual Network 1 VM VM Virtual Network n NetworkFunctionVirtualizationandServiceChaining • Virtualize Network Functions to software appliances on commercial off the shelf servers • Automate provisioning of L4-L7 Network services in data center • Cloud provider offers physical equivalent of network services to each tenant virtual network 13 Latency Scalability Agility Performance
  • 14. NetworkPlatformsGroup Network Service overlay Network Virtualization with Service overlay Virtual Network abstraction (Network Hypervisor) using tunnel overlays e.g. VXLAN, NVGRE, Geneve Physical IP Network Virtual Overlay Networks vSwitchvSwitch vSwitch SDN Controller • Service Overlays deployed over Network Virtualization • Service Chains are dynamically created on demand, within a tenant virtual Network • Network Services Header (NSH), Geneve are examples of protocols to enable Service overlays VM VM Virtual Network n Service Overlay (or Service Plane) 14 VM VM Virtual Network 1 VM VM Virtual Network 2 Latency Scalability Agility Performance
  • 16. PacketComingfromthehardware 16 Key elements for physical networking: • Ethernet Port on the server – commonly called pNIC (physical NIC) • RJ45 Cable • Ethernet Port on the physical switch • Uplink Port on the physical switch – connects to external network
  • 17. PacketProcessinginVirtualnetworking 17 Key elements for virtual networking: Ethernet Port on VM Virtual RJ45 Cable Ethernet Port on Virtual Switch Uplink Port on the Virtual Switch All these elements need to be virtualized by either: • The operating system (KVM, XEN, HYPER V etc) • Or The hardware should recognize virtual machines
  • 18. Telcocloud-EndToendAgility distributed,local,automated 18 Automated Infrastructure Infrastructure Management and Orchestration Optimized Workload Placement Security Policy and Lifecycle Automation RESTful interfaces Telco Cloud Automated Service Level Agreement Unified Management Plan Services Management & Orchestration VNF – MEC* VNF – VRAN* VNF – IOT*VNF – EPC* Infrastructure Orchestration Software Services Delivery Modernize and Virtualize System Architecture 4:1 Workload Consolidation Intel® VT + NFV Optimized Platforms Resource Pool Storage Network Compute Infrastructure Attributes Power Performance Security Thermals Utilization Location VNF – Virtual Network Function, EPC- Evolved Packet Core, MEC-Mobile Edge Computing, IoT-Internet of Things
  • 19. Virtualisation Technology Orchestration Infrastructure Layer / Data Plane Intel Architecture NFV/SDN Accelerators VT-d SR-IOV Virtual Machine Monitor(VMM)/Hypervisor OpenStack L2 VNF Applianc e L2 VNF Applianc e L3 VNF Applianc e Control Plane OpenContrail Open Daylight ONOS DPDK DPDK DPDK VMDq NIC Silicon NIC Silicon QAT Chipset Acceleration Hyperscan KVM XEN HYPER-V QEMU Virtual NIC Virtual NIC Microsoft azure RDT IA CPU NIC Silicon Virtual Switch Amazon EC2 L3 VNF Applianc e DPDK Virtual NIC Security VNF Applianc e DPDK Virtual NIC DPDK V Fd.io Legopus Open vSwitch POF OpenSwitch BESS DPDK Virtual Switch CloudStack Open Shift Google Compute Engine Security VNF Applianc e DPDK Virtual NIC VMM/ Hypervisor Latency Scalability Agility Performance
  • 20. Application Plane Orchestration Infrastructure Layer / Data Plane Intel Architecture NFV/SDN Accelerators VT-d SR-IOV Virtual Machine Monitor(VMM)/Hypervisor OpenStack L2 VNF Applianc e L2 VNF Applianc e L3 VNF Applianc e Control Plane OpenContrail Open Daylight ONOS DPDKDPDK DPDK VMDq NIC Silicon NIC Silicon QAT Chipset Acceleration Hyperscan KVM XEN HYPER-V QEMU Virtual Fuunction Microsoft azure RDT IA CPU NIC Silicon Virtual Switch Amazon EC2 L3 VNF Applianc e DPDK Security VNF Applianc e DPDK Virtual NIC DPDK V Fd.io Legopus Open vSwitch POF OpenSwitch BESS DPDK Virtual Switch CloudStack Open Shift Google Compute Engine Security VNF Applianc e DPDK Virtual NIC VMM/ Hypervisor Virtual Fuunction Virtual Fuunction Latency Scalability Agility Performance
  • 21. DCG Connectivity Group 21 Worldwide Server Market - Ethernet Port Speed Adoption Forecast Segments Speed of Adoption 2016 2017 2018 2019 2020 Tier 1 Cloud DC (>1M Servers) 10GbE  40GbE 10GbE 40GbE  50GbE 10GbE  25GbE 50GbE  100GbE 25GbE  50GbE 100GbE 50GbE  100GbE 100GbE+ 50GbE  100GbE Tier 2/3 Cloud DC 1GbE  10GbE 1GbE  10GbE 10GbE  25GbE 10GbE 25GbE  50GbE 10GbE  25GbE 50GbE  100GbE 25GbE  50GbE 50GbE  100GbE 25GbE  50GbE Enterprise / Premises 1GbE  10GbE 1GbE 10GbE  40GbE 1GbE  10GbE 10GbE  40GbE/50GbE 1GbE  10GbE 10GbE/40GbE  50GbE 10GbE 50GbE 10GbE Source: Worldwide Server Market – Network Metrics Dell’Oro Group January 2017 Definitions 1GbE: Single to Multiple Port 1GbE 10GbE: Single to Multiple Port 10GbE 40GbE: Single 40GbE or Quad-Port 10GbE 25GbE: Single-Port 25GbE 50GbE: Dual-Port 25GbE or Single 50GbE 100GbE: Single 100GbE or Quad-Port 25GbE Innovators / Early Adopters Majority Adopters
  • 22. DCG Connectivity Group 22 Market Dynamics in 2017 10GbE: Continued growth in ‘17 • 2016: 13.7M ports • 2017: 16.7M ports • Seeing demand for 4x10GbE • SFP+ and 10GBASE-T 25/50GbE: Starting to ramp in‘17 • 2016: 25Gb/240k, 50Gb/100k • 2017: 25Gb/1M, 50Gb/800k 40GbE: Shifting in ‘17 • T1 Cloud moves to 25GbE+ • Rest of mkt continues to grow • 2016: 2M ports • 2017: 1.5M ports Sources: Dell’Oro Group, November 2016
  • 23. DCG Connectivity Group XL710 40GbE QSFP+ Network Virtualization Overlays Acceleration X520 10GbE SFP+ World’s Best Selling 10GbE CNA X540 10GBASE-T World’s 1st Single Chip 10GBASE-T XXV710 25GbE SFP28 Cloud and Network Virtualization Overlays X550 10GBASE-T 2nd Generation Single Chip 10GBASE-T X710 10GBASE-T Quad-Port 10GBASE-T X710 10GbE SFP+ Cloud and Network Virtualization Overlays EthernetAdapters Intel® Ethernet Adapter in Market 700Series 23 500Series Latency Scalability Agility Performance
  • 24. Virtualisation Technology Orchestration Infrastructure Layer / Data Plane Intel Architecture NFV/SDN Accelerators VT-d SR-IOV Virtual Machine Monitor(VMM)/Hypervisor OpenStack L2 VNF Applianc e L2 VNF Applianc e L3 VNF Applianc e Control Plane OpenContrail Open Daylight ONOS DPDK DPDK DPDKVirtual NIC VMDq NIC Silicon NIC Silicon QAT Chipset Acceleration Hyperscan KVM XEN HYPER-V QEMU Virtual NIC Virtual NIC Microsoft azure RDT IA CPU NIC Silicon Virtual Switch Amazon EC2 L3 VNF Applianc e DPDK Virtual NIC Security VNF Applianc e DPDK Virtual NIC DPDK V Fd.io Legopus Open vSwitch POF OpenSwitch BESS DPDK Virtual Switch CloudStack Open Shift Google Compute Engine Security VNF Applianc e DPDK Virtual NIC VMM/ Hypervisor Latency Scalability Agility Performance
  • 25. Kernel Space Driver 25 PacketProcessingKernelvs.UserSpace User Space NIC Applications Stack System Calls CSRs Interrupts Memory (RAM) Packet Data Copy Socket Buffers (mbuf’s) Configuration Descriptors Kernel Space Driver Configuration Descriptors DMA Benefit #1 Removed Data copy from Kernel to User Space Benefit #2 No Interrupts Descriptors Mapped from Kernel Configuration Mapped from Kernel Descriptor Rings Memory (RAM) User Space Driver with Zero Copy Kernel Space User Space NIC DPDK PMD Stack UIO Driver System Calls CSRs DPDK Enabled App DMA Descriptor Rings Socket Buffers (skb’s) 1 2 3 1 2 Benefit #3 Network stack can be streamlined and optimized DATA
  • 26. Benefits – Eliminating / Hiding Overheads Interrupt Context Switch Overhead Kernel User Overhead Core To Thread Scheduling Overhead Eliminating How? Polling User Mode Driver Pthread Affinity 4K Paging Overhead PCI Bridge I/O Overhead Eliminating /Hiding How? Huge Page Lockless Inter-core Communication High Throughput Bulk Mode I/O calls To Tackle this challenge, what kind of devices /latency we have at our disposal?
  • 27. Last Level Cache L2 Cache Challenge: What if there is L1 Cache Miss and LLC Hit? L1 Cache Core 0 L1 Cache Core 0 LLC Cache 40 cycle With 40 cycles LLC Hit, How will you achieve Rx budget of 19 cycles ? L1 Cache Miss How?
  • 28. • 40 ns gets Amortized Over Multiple Descriptors • Roughly getting back to the latency of L1 cache hit per packet • Similarly for packet i/o, Go For Burst Read 1. Packet I/O Solution – Amortizing Over Multiple Descriptors
  • 29. Last Level Cache L2 Cache Examine Bunch Of Descriptors At A Time L1 Cache Core 0 LLC Cache 40 cycle With 8 Descriptors, 40 ns gets amortized over 8 Descriptors Read 8 Packet Descriptors at a time Packet Descriptor 5 Packet Descriptor 0 1. Packet I/O Packet Descriptor 1 Packet Descriptor 2 Packet Descriptor 3 Packet Descriptor 4 Packet Descriptor 6 Packet Descriptor 7
  • 30. 30 Packet processing software.intel.com/networking Packet sanity -> CRC calculation VLAN tag present ? -> TPID Ingress/Egress VLAN Filtering -> MAC + Port+ VLAN membership SRC learning -> MAC + PORT DEST MAC Learning -> MAC + port QOS -> PCP
  • 32. NFV Packet processing explosion software.intel.com/networking 32
  • 33. NVO – Key Data-Plane Encapsulation Protocols Encapsulation Protocol Advocate Description GRE (Generic Routing Encapsulation) Cisco* IP + GRE, Inner Payload- Ethernet/IPV4/IPV6/NSH STT (Stateless Transport Tunneling) Nicira* IP + TCP (like) + STT, Inner Payload- Ethernet only VXLAN (Virtual Extensible LAN) Vmware* Cisco* IP + UDP + VXLAN, Inner Payload- Ethernet only NVGRE (Network Virtualization using GRE) Microsoft* IP + Modified GRE, Inner Payload- Ethernet only Geneve (Generic Network Virtualization Encapsulation) VMware/Nicir a IP + UDP + Geneve, Inner Payload- Ethernet/IPV4/IPV6 VXLAN-GPE (Generic Protocol Extension for VXLAN) Cisco IP + UDP + VXLAN-GPE, Inner Payload-Ethernet/IPV4/IPV6/NSH NSH (Network Service Header) Cisco Requires Transport Protocol, Inner Payload-Ethernet/IPV4/IPV6 Hypervisor Virtual Switch Physical Hardware Physical IP Network Virtual Network Abstraction using tunnel overlays e.g. VXLAN, Geneve and NVGRE Open Virtual Switch Open Virtual Switch Open Virtual Switch Open Virtual Switch Network Virtualization Controller e.g. VMware* NSX Virtual Network 2 Virtual Network 3Virtual Network 1 Server Virtualization Network Virtualization
  • 34. 34 Packets per second software.intel.com/networking Frame Part Minimum Frame Size Maximum Frame Size Inter Frame Gap (9.6 ms) 12 bytes 12 bytes MAC Preamble (+ SFD) 8 bytes 8 bytes MAC Destination Address 6 bytes 6 bytes MAC Source Address 6 bytes 6 bytes MAC Type (or length) 2 bytes 2 bytes Payload (Network PDU) 46 bytes 1,500 bytes Check Sequence (CRC) 4 bytes 4 bytes Total Frame Physical Size 84 bytes 1, 538 bytes Table 1. Maximum Frame Rate and Throughput Calculations For a 1-Gb/s Ethernet Link [1,000,000,000 b/s / (84 B * 8 b/B)] == 1,488,096 f/s (maximum rate)
  • 35. 35 On Intel® Architecture At 256B, an 18C CPU running 2 GHz can satisfy 100 GbE throughput as long as we stay within 751 cycles/packet • At 512B, the budget is 1447 cycles If we run an Instructions/clock (IPC) of ~2 • 256B = 1502 instructions • 512B = ~2894 instructions If the IPC is 2.5 … • 256B = 1877 instructions • 512B = 3617 instructions Disclaimer: Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance.
  • 36. Open vSwitch with DPDK – Performance • Disclaimer: Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. • Test configurations: E5-2658 (2.1 GHz, 8C/16T) DP; PCH: Patsburg; LLC: 20 MB; 16 x 10GbE Gen2 x8; 4 memory channels per socket @ 1600 MT/s, 8 memory channels total; DPDK 1.3.0-154 • E5-2658v2 (2.4 GHz, 10C/20T) DP; PCH: Patsburg; LLC: 20 MB; 22 x 10GbE Gen2 x8; 4 memory channels per socket @ 1867 MT/s, 8 memory channels total; DPDK 1.4.0-22 • *Projection data on 2 sockets extrapolated from a 1S run on a Wildcat Pass system with E5-2699 v3.
  • 37. DPDK Generational Performance Gains • IPv4 L3 forwarding performance of 64-byte packets, by year: 55 MPPS / 37 Gbps (2010, 2S WMR); 80.1 MPPS / 53.8 Gbps (2011, 1S SNB); 164.9 MPPS / 110.8 Gbps (2012, 2S SNB); 255 MPPS / 171.4 Gbps (2013, 2S IVB); 279.9 MPPS / 187.2 Gbps (2014, 2S HSW); 346.7 MPPS / 233 Gbps (2015, 2S BDW) • Broadwell EP system configuration: hardware platform: SuperMicro* X10DRX; CPU: Intel® Xeon® processor E5-2658 v4; chipset: Intel® C612; sockets: 2; cores per socket: 14 (28 threads); LL cache: 30 MB; QPI/DMI: 9.6 GT/s; PCIe: Gen3 x8; memory: DDR4 2400 MHz, 1Rx4 8 GB (64 GB total), 4 channels per socket; NIC: 10 x Intel® Ethernet CNA XL710-QDA2, PCI-Express Gen3 x8 dual-port 40 GbE (1x40G/card), 40,000 Mbps; BIOS version 1.0c (02/12/2015); OS: Debian 8.0, kernel 3.18.2; DPDK 2.2.0 • Disclaimer: Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit http://www.intel.com/performance. • * Other names and brands may be claimed as the property of others.
  • 38. Software router examples • "Now we get 10G line rates per core with 64-byte packets and linear performance as we add cores," said Herrell. "Systems shipping today deliver almost 200G throughput for a two-socket server, and in the routing/firewall world that is shocking because it replaces $100,000 proprietary boxes." (Brocade Vyatta) • Sandvine, Dell®, and Intel®, using standards-based virtualization technologies, have achieved data-plane performance at scale: 1.1 Tbps (using realistic traffic, including diverse protocols, encapsulation and tunneling) • Achieving 100 Gbps performance at the core with Poptrie and Kamuee Zero: NTT Communications
  • 39. Intel® RDT: CMT & CAT • Cache Monitoring Technology (CMT): identify "noisy neighbors" and misbehaving or cache-starved applications, and reschedule according to priority; cache occupancy is reported on a per Resource Monitoring ID (RMID) basis • Cache Allocation Technology (CAT): a Last Level Cache partitioning mechanism enabling the separation of applications, threads, VMs, etc.; misbehaving threads can be isolated to increase performance determinism • Cache monitoring and allocation technologies improve cache visibility and run-time determinism (Slide diagram: hypervisor, vSwitch and application threads on Cores 0..n sharing the Last Level Cache.)
  • 40. Top networking challenges seen in IT (source: TechTarget's 2015 purchasing intentions survey) • 56% Improving network security • 44% Need more bandwidth • 34% Network virtualization • 33% Aligning IT and corporate goals • 30% Moving applications to the cloud • 29% Ensuring applications run optimally • 27% BYOD-related access and policy concerns • Intel is committed to helping solve these critical challenges
  • 41. Virtualisation technology stack (slide diagram) • Orchestration: OpenStack, CloudStack, OpenShift, Microsoft Azure, Amazon EC2, Google Compute Engine • Control plane: OpenDaylight, ONOS, OpenContrail • Infrastructure layer / data plane: L2, L3 and security VNF appliances, each running DPDK over a virtual NIC on a DPDK-accelerated virtual switch (Fd.io, Lagopus, Open vSwitch, POF, OpenSwitch, BESS) • Virtual Machine Monitor (VMM) / hypervisor: KVM, Xen, Hyper-V, QEMU • Intel Architecture NFV/SDN accelerators: VT-d, SR-IOV, VMDq, RDT, QAT chipset acceleration, Hyperscan, IA CPU, NIC silicon
  • 42. DPDK Acceleration Enhancements • DPDK framework: core libraries (EAL, MALLOC, MBUF, MEMPOOL, RING, TIMER), platform (KNI, POWER, IVSHMEM), classification (LPM, ACL, Classify, HASH), packet access / PMDs (e1000, ixgbe, i40e, fm10k, bonding, af_pkt, xenvirt, enic, ring, cxgbe, vmxnet3, virtio, mlx4, memnic, others; ETHDEV), QoS (METER, SCHED), PIPELINE, and utilities (IP Frag, CMDLINE, JOBSTAT, KVARGS, REORDER, TABLE) • Built around the DPDK API: traffic generators (Pktgen, T-Rex, Moongen, ...), vSwitches (OVS, Lagopus, ...), network stacks (libUNS, mTCP, SeaStar, libuinet, TLDK, ...), DPDK example apps, VNF apps, video apps, proxy apps • Acceleration enhancements: AES-NI, crypto, IPSec, DPI (Hyperscan), compression, event-based programming models, threading models (lthreads, ...), external mempool manager, SoC PMD and SoC model, programmable classifier/parser, future accelerators, 3rd-party GPU/FPGA, 3rd-party SoC and HW/SW (a minimal DPDK application skeleton follows below)
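Tying the framework pieces together, a minimal run-to-completion sketch using the core libraries named above (EAL, mempool, mbuf, ethdev). Error handling is mostly omitted, and port 0, the queue sizes, and pool sizes are illustrative:

```c
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_lcore.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define NB_MBUFS 8191   /* 2^n - 1 sizes are optimal for the ring allocator */
#define BURST    32

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)       /* parse EAL args, map hugepages */
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "mbufs", NB_MBUFS, 256 /* per-core cache */, 0,
        RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "mempool creation failed\n");

    uint16_t port = 0;                        /* first DPDK-bound port */
    struct rte_eth_conf conf = {0};
    rte_eth_dev_configure(port, 1, 1, &conf); /* 1 RX + 1 TX queue */
    rte_eth_rx_queue_setup(port, 0, 1024, rte_eth_dev_socket_id(port),
                           NULL, pool);
    rte_eth_tx_queue_setup(port, 0, 1024, rte_eth_dev_socket_id(port), NULL);
    rte_eth_dev_start(port);

    struct rte_mbuf *bufs[BURST];
    for (;;) {                                /* poll-mode loop: no interrupts */
        uint16_t n = rte_eth_rx_burst(port, 0, bufs, BURST);
        uint16_t sent = rte_eth_tx_burst(port, 0, bufs, n);
        while (sent < n)                      /* free what the TX ring refused */
            rte_pktmbuf_free(bufs[sent++]);
    }
    return 0;
}
```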
  • 43. DPDK path to communities, vendors, SPs • Major contributors to DPDK open source software • Fully open source software project with a strong development community: http://dpdk.org, BSD licensed • DPDK is available as part of several OS distributions (the slide shows distribution logos with minimum versions: "Version 6 +", "Version 7.1 +", "Version 7.1 & higher", "Version 15.10 +", "Version 10.1 +", "Version 22 +") • Open source projects based on DPDK: mTCP, Seastar, Pktgen, Netflow, many more (see backup) • Intel® ONP, end customers, vendors, FD.IO members • Intel® Network Builders: fostering a vibrant ecosystem to lead the network transformation of tomorrow, https://networkbuilders.intel.com/solutionscatalog
  • 44. Telco Cloud – end-to-end agility: distributed, local, automated • Automated infrastructure; infrastructure management and orchestration; optimized workload placement; security policy and lifecycle automation; RESTful interfaces; automated service level agreements; unified management plan • Services management & orchestration over VNFs (MEC, vRAN, IoT, EPC) and infrastructure orchestration software for services delivery • Modernize and virtualize the system architecture: 4:1 workload consolidation with Intel® VT + NFV-optimized platforms • Resource pool (compute, network, storage) with infrastructure attributes: power, performance, security, thermals, utilization, location
  • 47. Intel® Confidential — INTERNAL USE ONLY Scale program / community development collaboration (logos are at approximate positions on the enablement framework) • Joint path-finding/discovery: TOTAL: 117 influenced partners optimizing their solutions on Intel CPUs and other ingredients this year • Optimizations: TOTAL: 72 developments/selections of end-to-end NFV solution use cases with multiple partners • PoCs/trials/deployments: TOTAL: 33, deployed: 22 • Meet-ups bring in new partners and let them discover SDN/NFV ingredients on IA • Hands-on training, IDZ collateral and IEMs help support optimization and continued help with individual deployments
  • 48. Intel® Confidential — INTERNAL USE ONLY Model 2017 to help fast-track ISV solutions • Continue to win developer mindshare and increase adoption of IA in the NFV/SDN space by deepening engagements via live training, active innovators, and Dev Mesh projects • Technologies: DPDK, QuickAssist, OpenStack, VTune, SDN/NFV forum, Open vSwitch, RDT, Fd.io, SR-IOV, VMDq, VT-d • Channels: Intel Innovator, Dev Mesh, live training, ISV support, IDZ • 1300+ members • 2000+ developers trained worldwide from 146 companies • 8 active innovators • 12 Dev Mesh projects • 8000+ organic users every month
  • 49. Intel® Confidential — INTERNAL USE ONLY Your Role in the Community
  • 50. Intel® Confidential — INTERNAL USE ONLY 50
  • 51. Intel® Confidential — INTERNAL USE ONLY Wipro case study • Visa payment system PoC • Wipro: system integrator (40 SMEs) • In-memory database, Aerospike (9-node cluster) • Cisco UCS 460 with 96 Intel cores for the horsepower, with DPDK to manage (5-node cluster) • Non-volatile memory and flash drives • Requirement: network scalability from 10G to 40G; ability to process messages at 13 ms low latency with a high throughput of 15,000 credit card transactions per second • Project status: started in July, with developers trained via the DPDK/NFV dev lab on July 11th; scheduled to deploy for evaluation by Visa by end of October 2016 • Scale of deployment: worldwide • Scale community engagement: devs trained hands-on in DPDK/NFV on July 11th; continued participation in meet-ups; IDZ collateral for BKMs; email support with the Scale team • Training feedback from the Wipro program manager: [Ashish] "The highlight of the training was the hands-on approach to getting familiar with DPDK. This is critical for the success of such initiatives, and I understand that this requires a lot of planning (infrastructure setup). I hope that Intel continues to organize such trainings to educate developers." [Ashish] "The community and experts have been helpful and we would reach out for more discussions."
  • 52. Intel® Confidential — INTERNAL USE ONLY What are Networking Innovators doing right now? Dharani Vilwanathan (Dev Lab winner) Project: PerfectStream: A DPDK-based Video Gateway About the Project: PerfectStream is primarily a Video Gateway that receives multiple streams, stores the feed and/or relays the feed as needed in the way the client prefers.
  • 53. Intel® Confidential — INTERNAL USE ONLY What are Networking Innovators doing right now? Shivaram Mysore Project: Deploying an SDN Wired/Wireless Network About the project: Faucet enables replacement of a legacy L2/L3 network switch with SDN functionality. Here OVS + DPDK on an Intel x86 white box is used as the data plane (switch), with the Faucet controller managing it.
  • 54. Intel® Confidential — INTERNAL USE ONLY What are our Innovators doing right now? Sridhar Pitchai Project: DPDK data path for control-plane traffic Objective: implement a DPDK-based data path to bypass the kernel IP stack for packets punted to the CPU from a vendor-chip-based fast path. The FlexSwitch NOS is currently running in some of the world's most demanding networks with the same architectural model that has been proven by Facebook, Amazon, and others.
  • 55. Intel® Confidential — INTERNAL USE ONLY Conclusions • Network Function Virtualization and Software Defined Networking promise to transform, and are transforming, the industry by moving network functions from fixed-function ASICs to commodity hardware • The answer to scalable and performant virtualization is to use software for agility and hardware offloads for well-defined workloads • Intel is working with the ecosystem to define the best solutions, in both software and hardware • DPDK is an example of a successful open source project that helps the industry implement packet processing on x86-based platforms • We are here to help you in your work
  • 56. Intel® Confidential — INTERNAL USE ONLY Thank you sujata.tibrewala@intel.com @sujatatibre
  • 57. Intel® Confidential — INTERNAL USE ONLY Meet-up partnerships, Bay Area (total dev reach ~10,000+) • Partner: 93 members, San Jose, http://www.meetup.com/sbysdnnfvcloud/ • Partner: 809 members, Santa Clara, http://www.meetup.com/SDN-Switching-Group/ • Intel Developer Zone meet-up: 1300+ members, Santa Clara, http://www.meetup.com/Out-Of-The-Box-Network-Developers • Partner: 5739 members, San Francisco, http://www.meetup.com/openstack/ • Partner: 2851 members, Santa Clara, http://www.meetup.com/openvswitch
  • 58. Intel® Confidential — INTERNAL USE ONLY IDZ meet-ups worldwide • Partner: 645 members, Dublin, https://www.meetup.com/OpenStack-Ireland/ • 264 members, Bangalore, https://www.meetup.com/SDN-NFV-Meetup/ • 67 members, Portland, https://www.meetup.com/Out-Of-The-Box-Network-Developers-PDX/
  • 59. Intel® Confidential — INTERNAL USE ONLY External developer events and meet-ups • TCS SDN/NFV one-day event, Sep 2016 • Women Who Code at VMware: DPDK / Open vSwitch hands-on, Sep 2016 • DPDK Summit Bangalore, April 2017

Editor's notes

  1. Place at the back of the deck
  2. Importance of physical and virtual product architecture consistency. The market ideally requires the same software running across both the physical and virtual appliance architectures in order to deliver a consistent, scalable solution: network response, feature and functionality parity, performance scaling, a common management and provisioning framework, and ease of deployment. Today, many vendors offer hybrid (physical/virtual) solutions to meet virtualized performance requirements. Architectural consistency enables a significantly more cohesive and streamlined solution offering.
  3. Intel is influencing the transformation through a 4-part strategy. (This is what SDND is all about: SDND enables the transformation.) The four elements of the strategy feed each other and create a strong foundation for the industry to build on. 1. Advance open source and open standards: promote and contribute to industry standards and open source solutions for interoperability; committed to "open" standards for a competitive market. 2. Deliver open reference designs: leading performance, security, open source software and reference designs; enable industry-leading manageability by exposing health, state and resource availability for optimal workload placement and configuration. 3. Enable an open ecosystem on IA: enable TEMs/OEMs to deliver industry-leading performance-, power-, cost- and security-optimized solutions. 4. Collaborate on trials and deployments: building solution experience with leading enterprise, telco and cloud service providers and vendors.
  4. The left side of the diagram shows what SDN means and the right side shows what NFV means.
  5. In server virtualization, the hardware is abstracted by a hypervisor and virtual resources are presented to guest operating systems; compute and storage resources are virtualized, pooled and provisioned automatically. Similarly, network virtualization technology allows abstraction of the network: virtual networks can be created as an overlay on top of existing physical infrastructure. Virtual network resources can be pooled and dynamically created and provisioned any time, anywhere. The virtual network configuration and policies can be stored and restored like any other virtual compute or storage resources, and VM mobility can be supported across subnets and geographies. This is very similar to VPN technology, where a user is able to create overlays over the Internet to connect anytime, anywhere to corporate networks. Network virtualization uses tunneling technology to tunnel Ethernet traffic over existing IP networks, e.g. NVGRE or VXLAN tunneling. Currently network virtualization is implemented in software virtual switches in hypervisors, e.g. Open vSwitch (OVS), VMware vDS, or the Hyper-V virtual switch. Gateway devices are used to bridge to legacy networks that do not support network virtualization; the gateway function can be implemented in software on standard servers or in hardware switches (e.g. top-of-rack switches). Network virtualization control/management functionality is performed by an SDN controller (a.k.a. network virtualization controller or network hypervisor), e.g. VMware NSX, OpenDaylight (ODL), MS Hyper-V Network Virtualization, Open Virtual Network (OVN), or virtual networking as a service provided as part of OpenStack. Network virtualization technology also provides complete isolation from other virtual networks running concurrently on the same physical network infrastructure; for example, a company ABC (or department ABC) virtual network coexists with a company XYZ (or department XYZ) virtual network collocated in the same public (or private) cloud infrastructure. So isolation, data privacy and service assurance are very important in these deployments.
  6. In current data centers, network services such as firewalls, IDS and load balancers are implemented in specialized boxes deployed in the network infrastructure, and multiple such services are chained in the physical network. A customer moving to a public or private cloud expects to have the same services available on virtual networks as well, and a data center operator or service provider wants to offer these L4-L7 applications as a service to their customers. The network services can be virtualized and run on VMs alongside business applications; these are also called VNFs (virtual network functions) in the ETSI reference architecture. With SDN and NFV technologies, network services can be provisioned when VMs are provisioned, and traffic can be dynamically rerouted to flow through virtual appliances (VNFs) before reaching business applications. Forwarding traffic through multiple network service functions is also called service chaining. So network services can be virtualized, pooled, automatically provisioned, and made available within a virtual network for use by a tenant.
  7. An enterprise or medium business that gets a pool of compute resources in the cloud wants to create a virtual network and also wants network services like VPN, firewall, IDS/IPS and load balancers provisioned within that virtual network. A network service overlay created over the virtual overlay networks makes it possible to create and provision network functions (services) and transport data between them. Service function chaining enables forwarding data through multiple network services chained in a service path before delivery to the VM running the business applications. Traffic arriving into the virtual network is classified by a service classifier, which determines the service path; a service function forwarder (SFF, which could be implemented in a virtual switch) in the service plane forwards packet data to the next service function in the chain before delivering it to the VM/applications. A service orchestrator composes service chains/service graphs depending on the policies configured by the administrator. The Network Service Header (NSH) is a protocol used in the service plane for carrying metadata between services, plus a service path identifier that enables creation of a service chain and forwarding between the services. The NSH header is transport independent and can be carried over any overlay transport protocol like VXLAN, VXLAN-GPE, GRE, etc. (a sketch of the header layout follows below).
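A sketch of the NSH layout as later standardized in RFC 8300, using shift-based accessors instead of bitfields for portability. Note the deck predates the final RFC (it references draft-quinn-sfc-nsh), so this reflects the published layout rather than the draft's:

```c
#include <stdint.h>
#include <arpa/inet.h>

/* NSH base + service path headers (layout per RFC 8300). */
struct nsh_hdr {
    uint16_t base;         /* ver(2) O(1) U(1) TTL(6) length(6) */
    uint8_t  md_type;      /* low 4 bits: metadata type (0x1 or 0x2) */
    uint8_t  next_proto;   /* 0x1 IPv4, 0x2 IPv6, 0x3 Ethernet, 0x4 NSH */
    uint32_t sp;           /* service path: SPI(24) | SI(8) */
} __attribute__((packed));

/* Service Path Identifier: which service chain the packet is on. */
static inline uint32_t nsh_spi(const struct nsh_hdr *h)
{
    return ntohl(h->sp) >> 8;
}

/* Service Index: position within the chain, decremented at each hop. */
static inline uint8_t nsh_si(const struct nsh_hdr *h)
{
    return (uint8_t)(ntohl(h->sp) & 0xff);
}
```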
  9. Keith Wiles will introduce us to what DPDK means, the core APIs, and how it is used in the NFV space. Rashmin and Rahul will talk extensively about how we are enabling network resources to be virtualized: how technologies like VT-d and SR-IOV get packets from the NIC all the way up to the VMs faster. Irene, Keith, Ashok and Clayne will talk about how software virtualization works: what virtio means and how Open vSwitch uses it to switch packets up to the VM in software. Sangjin and Josh will talk about BESS, another open source soft switch initiative started by Berkeley. Each of the sessions will be followed by hands-on sessions or a code walkthrough to give you a head start on working with these technologies. We will end with performance optimization tips from MJ and how to use open source tools and VTune to get a sense of where your performance bottlenecks are. Last but not least, Georgi will talk about how we at Intel benchmark DPDK and report the performance publicly. A disclaimer: not everything is shared here because not everyone is under NDA, but we are sharing whatever is public domain knowledge.
  12. Can be physical or virtual hardware (e.g. the networking stack in a VM). In system programming, an interrupt is a signal to the processor, emitted by hardware or software, indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler (or interrupt service routine, ISR) to deal with the event. This interruption is temporary, and after the interrupt handler finishes, the processor resumes normal activities.[1] There are two types of interrupts: hardware interrupts and software interrupts. Receive ring buffers are shared between the device driver and the NIC. The card assigns a transmit (TX) and a receive (RX) ring buffer. As the name implies, the ring buffer is a circular buffer where an overflow simply overwrites existing data. There are two ways to move data from the NIC to the kernel: hardware interrupts and software interrupts, also called SoftIRQs. The RX ring buffer is used to store incoming packets until they can be processed by the device driver. The device driver drains the RX ring, typically via SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or "skb" to begin their journey through the kernel and up to the application which owns the relevant socket. The TX ring buffer is used to hold outgoing packets destined for the wire. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drops can occur, which in turn adversely affects network performance. (A toy ring sketch follows below.)
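A toy single-producer/single-consumer ring in the spirit of the RX descriptor ring described above. It is simplified: this variant drops on overflow rather than overwriting, and it ignores the memory-barrier details a real driver needs:

```c
#include <stddef.h>
#include <stdint.h>

#define RING_SZ 256   /* power of two, so indices wrap with a cheap mask */

/* Toy single-producer (NIC) / single-consumer (driver) descriptor ring. */
struct rx_ring {
    void *desc[RING_SZ];
    volatile uint32_t head;   /* advanced by the producer */
    volatile uint32_t tail;   /* advanced by the consumer */
};

/* Producer side: returns 0 when the ring is full (packet dropped). */
static int ring_put(struct rx_ring *r, void *pkt)
{
    if (r->head - r->tail == RING_SZ)
        return 0;
    r->desc[r->head & (RING_SZ - 1)] = pkt;
    r->head++;
    return 1;
}

/* Consumer side (e.g. driven by a SoftIRQ): NULL when the ring is empty. */
static void *ring_get(struct rx_ring *r)
{
    if (r->tail == r->head)
        return NULL;
    void *pkt = r->desc[r->tail & (RING_SZ - 1)];
    r->tail++;
    return pkt;
}
```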
  13. STT - https://tools.ietf.org/html/draft-davie-stt-01 VXLAN - http://www.rfc-editor.org/rfc/rfc7348.txt Geneve - https://datatracker.ietf.org/doc/draft-gross-geneve/ VXLAN-GPE - http://www.ietf.org/archive/id/draft-quinn-vxlan-gpe-03.txt NSH - https://datatracker.ietf.org/doc/draft-quinn-sfc-nsh/ GRE - http://tools.ietf.org/html/rfc2890 NVGRE - https://datatracker.ietf.org/doc/draft-sridharan-virtualization-nvgre/ OVS-supported tunnel protocols - GRE, VXLAN, IPsec, and GRE and VXLAN over IPsec NSX vSwitch - VXLAN, STT, GRE
  14. Top message: Intel is committed to helping solve the key networking challenges highlighted by top IT professionals in the recent TechTarget survey. Suggested text: this slide shows data from a recent TechTarget purchasing intentions survey that describes the biggest challenges IT faces today and ties back to the previous slide. More than half (56%) of the 1,560 networking pros worldwide polled in the SearchNetworking study identified network security as their main hurdle. The need for more bandwidth ranked as the second-biggest challenge this year, with 44% of respondents citing it as one of their main obstacles. Ensuring applications run optimally was at 29%, which highlights the need for application reliability and security. The focus of this deck is to cover the following three areas in the CCV model and tie back to the relevant solutions: improving network security, addressing the need for bandwidth, and application delivery optimization. [transition sentence] We will start with the Security section in the next slide…
  17. Ashok will talk about ONP. ONP is a server reference architecture that brings together key hardware and open software ingredients optimized for network functions virtualization (NFV) and software-defined networking (SDN) in the Telecom, Enterprise, and Cloud markets.