Cloud Monitors Cloud
by:
Raymond Burkholder
for:
Calgary UNIX Users Group
on:
April 23, 2019
I am:
Raymond Burkholder
In and Out of:
Software Development
Linux Administration
Network Management
System Monitoring
raymond@burkholder.net
ray@oneunified.net
https://blog.raymond.burkholder.net
Cloud Monitors Cloud
Upstream 1
Upstream 2
Cloud01
Cloud02
Cloud03
Monitoring Cloud Monitored Cloud
Items To Talk About
● Virtualization
● Redundancy & Resiliency
● Networking
● Firewall
● Connectivity
● Open Source Tools:
– iproute2 – kernel tools for building sophisticated connections
– Open vSwitch – layer 2 switching and firewalling
– Free Range Routing – layer 2/3 route distribution with BGP, EVPN, anycast
– LXC – containers, lighter weight than Docker
– nftables – successor to iptables for ACLs with connection tracking
– SaltStack – living documentation, automation, orchestration
Overall Goals: a) total remote access, b) total re-creation of the solution via automation
Monitoring Replica – Cloud ‘nn’
nftables
dnsmasq
apt-cacher-ng
salt
check_mk
smtp
Free Range Routing
Open vSwitch
Console Serial Connections
Cloud01 Cloud02 Cloud03
Console Server Console Server
PDU
PDU
Mellanox Sw.
Mellanox Sw.
Host Host Storage Storage Host
Dual Console Servers for Diagnostics - Side A & Side B
Ethernet Management
Cloud01 Cloud02 Cloud03
Console Server A
PDU A
PDU B
Mellanox Sw. A
Mellanox Sw. B
Host Host Storage Storage Host
Console Server B
Ethernet Management Ports distributed across Cloud interfaces
[any Cloudxx can get to any other’s serial interface via one of two console servers]
Hand in Hand
● eBGP vs iBGP
– Multiple ASNs vs Single ASN (eBGP used in this installation)
● VxLAN vs VLAN
– ~16 million segment identifiers vs ~4,000 VLAN identifiers
– VXLAN, also called Virtual Extensible LAN, is designed to provide
layer 2 overlay networks on top of a layer 3 network by using
MAC-in-UDP (MAC address in User Datagram Protocol) encapsulation.
In simple terms, VXLAN can offer the same services as VLAN
does, but with greater extensibility and flexibility.
● aka EVPN (Ethernet VPN) via MP-BGP (Multi-Protocol BGP), used
for auto-distribution of VxLAN MAC/IP bindings
Layer 2 is cocaine. It has never been right — and yet people keep packaging it in various ways and
selling its virtues and capabilities. -- @trumanboyes
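As a rough sketch of the VxLAN plumbing described above, a layer 2 overlay segment can be stitched together by hand with iproute2. The VNI, interface names and local address mirror examples later in the deck; in this installation FRR/EVPN distributes the MAC/IP state rather than relying on flood-and-learn. Command sketch only — these require root on a suitably cabled host:

```shell
# create a VxLAN interface for VNI 1421, encapsulating over the host loopback
ip link add vxPub421 type vxlan id 1421 dstport 4789 local 10.20.1.1 nolearning
# attach it to a bridge so local containers share the overlay segment
brctl addbr brPub421
brctl addif brPub421 vxPub421
ip link set dev vxPub421 up
ip link set dev brPub421 up
```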
Light vs Heavy Virtualization
● LXC – (Linux Containers) is an operating-system-level
virtualization method for running multiple isolated Linux systems
(containers) on a control host using a single Linux kernel.
● KVM - (Kernel-based Virtual Machine) is a full virtualization
solution for Linux on x86 hardware containing virtualization
extensions ... that provides the core virtualization
infrastructure ... where one can run multiple virtual machines
running unmodified Linux or Windows images. Each virtual
machine has private virtualized hardware: a network card, disk,
graphics adapter, etc.
Virtualization Selection
● Since no customer applications are running on the
management cloud hosts, light virtualization in the form of LXC
containers is used
● Goal is to keep the base host install as plain and simple as
possible – all services and management functionality should be
segregated into individual containers
● Containers, and their configurations, can then be destroyed and
rebuilt at will as bugs and upgrades require
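A sketch of the container lifecycle this implies, using the stock LXC tools (container name, distribution and release here are illustrative):

```shell
# create and start a service container (requires root)
lxc-create -n dmsq01 -t download -- -d debian -r stretch -a amd64
lxc-start -n dmsq01
# when bugs or upgrades require, destroy and re-create it;
# Salt re-applies the container's configuration on first boot
lxc-stop -n dmsq01
lxc-destroy -n dmsq01
```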
Containers
● pprx0[1-3] – apt-cacher-ng – package proxy/caching
● edge0[1-2] – edge router
● fw0[1-2] – firewall
● nacl0[1-3] – salt stack master
● bind0[1-3] – dns/bind external resolution
● dmsq0[1-3] – dnsmasq – internal dns, dhcp, pxeboot, tftp
● cmk0[1-3] – check_mk (nagios wrapper)
● smtp0[1-3] – email server, notifications
One Physical Instance
Public
Addressing
Private
Addressing
EDGE
FW
DMSQ
Customer
Cloud
INTERNET
PPRX
SSH/VPN
SMTP
BIND
NACL
CMK
Containers with inter-container routing
Some services/containers should not be directly connected to the ‘outside’ world, and should
instead be proxied via service-specific intermediaries.
Resiliency
● Choices:
– Consul (DNS for service resolution)
● Requires heartbeats for each service type
– HAProxy (layer 4/7 load balancing – userland)
● Overkill for this service load type
– IPVS (layer 4 kernel-based load balancing)
● Only local to the machine
– BGP AnyCast (routing-based load distribution)
● Proven routing-based resiliency
AnyCast
● Add a container-unique loopback address
● Add a service-common loopback address – advertised into BGP
by each common service container
● When a container dies, its common loopback address disappears
● Loopback addresses are weighted in BGP so that services prefer
local instances
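A minimal sketch of that arrangement inside one container, with illustrative addresses taken from the routing tables shown later (10.20.1.x unique per container, 10.20.2.x shared per service), assuming FRR's vtysh. Command sketch only — requires root inside the container:

```shell
# unique per-container loopback plus the shared per-service loopback
ip addr add 10.20.1.19/32 dev lo     # unique to this container
ip addr add 10.20.2.103/32 dev lo    # shared by every instance of the service
# advertise both into BGP; when the container dies, its session drops and
# the shared /32 is withdrawn, so traffic shifts to a surviving instance
vtysh -c 'conf t' -c 'router bgp 64703' \
      -c 'address-family ipv4 unicast' -c 'redistribute connected'
```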
Host Functions
● Host functions are minimized. Management functions are relegated
to containers
● The host runs the main BGP router, which connects to the BGP
instances of each of the other two hosts
● Configured to handle the VxLAN/EVPN MAC/IP advertisements
to/from each container
● Keeps container traffic ‘segregated’ from the host’s ‘native’ routing
tables – virtualizes networking within and across the hosts
eBGP
● The next set of slides shows eBGP routing tables to demonstrate
the resiliency created by routing.
● A non-production two-box cloud is shown as an example
host01.ny1 neighbors
host01.ny1# sh ip bgp sum
IPv4 Unicast Summary:
BGP router identifier 10.20.1.1, local AS number 64601 vrf-id 0
BGP table version 62
RIB entries 55, using 8360 bytes of memory
Peers 9, using 174 KiB of memory
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
host02.ny1(10.20.3.2) 4 64602 100218 100229 0 0 0 07w4d05h 18
pprx01.ny1(10.20.5.11) 4 64701 100132 100147 0 0 0 09w6d12h 2
nacl01.ny1(10.20.5.12) 4 64702 100139 100157 0 0 0 09w6d06h 2
ntp01.ny1(10.20.5.13) 4 64705 100132 100148 0 0 0 09w6d12h 2
dmsq01.ny1(10.20.5.14) 4 64703 100133 100149 0 0 0 09w6d12h 2
bind01.ny1(10.20.5.15) 4 64706 100133 100150 0 0 0 09w6d12h 2
prxy01.ny1(10.20.5.17) 4 64704 100132 100146 0 0 0 09w6d12h 2
smtp01.ny1(10.20.5.18) 4 64707 100132 100145 0 0 0 09w6d12h 2
fw01.ny1(10.20.5.19) 4 64708 100130 100148 0 0 0 09w6d12h 1
Total number of neighbors 9
host01 has private ASN 64601, host02 has ASN 64602
host02.ny1 neighbors
host02.ny1# sh ip bgp sum
IPv4 Unicast Summary:
BGP router identifier 10.20.1.2, local AS number 64602 vrf-id 0
BGP table version 54
RIB entries 55, using 8360 bytes of memory
Peers 9, using 174 KiB of memory
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
host01.ny1(10.20.3.3) 4 64601 100233 100223 0 0 0 07w4d05h 18
pprx02.ny1(10.20.6.11) 4 64801 100135 100145 0 0 0 09w6d12h 2
nacl02.ny1(10.20.6.12) 4 64802 100135 100145 0 0 0 09w6d12h 2
ntp02.ny1(10.20.6.13) 4 64805 100135 100145 0 0 0 09w6d12h 2
dmsq02.ny1(10.20.6.14) 4 64803 100135 100146 0 0 0 09w6d12h 2
bind02.ny1(10.20.6.15) 4 64806 100136 100147 0 0 0 09w6d12h 2
prxy02.ny1(10.20.6.17) 4 64804 100135 100145 0 0 0 09w6d12h 2
smtp02.ny1(10.20.6.18) 4 64807 100135 100144 0 0 0 09w6d12h 2
fw02.ny1(10.20.6.19) 4 64808 100134 100145 0 0 0 09w6d12h 1
Total number of neighbors 9
Containers on host01 have private ASN 647xx, host02 containers use ASN 648xx
host01.ny1 loopbacks view A
host01.ny1# sh ip bgp
BGP table version is 62, local router ID is 10.20.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Origin codes: i - IGP, e - EGP, ? - incomplete
Network Next Hop Metric LocPrf Weight Path
*> 10.20.1.1/32 0.0.0.0 0 32768 ?
*> 10.20.1.2/32 10.20.3.2 0 0 64602 ?
*> 10.20.1.17/32 10.20.5.11 0 0 64701 ?
*> 10.20.1.18/32 10.20.5.12 0 0 64702 ?
*> 10.20.1.19/32 10.20.5.14 0 0 64703 ?
*> 10.20.1.20/32 10.20.5.17 0 0 64704 ?
*> 10.20.1.21/32 10.20.5.13 0 0 64705 ?
*> 10.20.1.22/32 10.20.5.15 0 0 64706 ?
*> 10.20.1.23/32 10.20.5.18 0 0 64707 ?
*> 10.20.1.24/32 10.20.5.19 0 0 64708 ?
*> 10.20.1.33/32 10.20.3.2 0 64602 64801 ?
*> 10.20.1.34/32 10.20.3.2 0 64602 64802 ?
*> 10.20.1.35/32 10.20.3.2 0 64602 64803 ?
*> 10.20.1.36/32 10.20.3.2 0 64602 64804 ?
*> 10.20.1.37/32 10.20.3.2 0 64602 64805 ?
*> 10.20.1.38/32 10.20.3.2 0 64602 64806 ?
*> 10.20.1.39/32 10.20.3.2 0 64602 64807 ?
*> 10.20.1.40/32 10.20.3.2 0 64602 64808 ?
... on next slide
Loopbacks 10.20.1.x/32 are unique per container
Containers on
host01 are seen
as local hops
Containers on host02
are seen as two hops
away via host02
host01.ny1 loopbacks view B
* 10.20.2.101/32 10.20.3.2 0 64602 64801 ?
*> 10.20.5.11 0 0 64701 ?
* 10.20.2.102/32 10.20.3.2 0 64602 64802 ?
*> 10.20.5.12 0 0 64702 ?
* 10.20.2.103/32 10.20.3.2 0 64602 64803 ?
*> 10.20.5.14 0 0 64703 ?
* 10.20.2.104/32 10.20.3.2 0 64602 64804 ?
*> 10.20.5.17 0 0 64704 ?
* 10.20.2.105/32 10.20.3.2 0 64602 64805 ?
*> 10.20.5.13 0 0 64705 ?
* 10.20.2.106/32 10.20.3.2 0 64602 64806 ?
*> 10.20.5.15 0 0 64706 ?
* 10.20.2.107/32 10.20.3.2 0 64602 64807 ?
*> 10.20.5.18 0 0 64707 ?
* 10.20.3.2/31 10.20.3.2 0 0 64602 ?
*> 0.0.0.0 0 32768 ?
*> 10.20.5.0/24 0.0.0.0 0 32768 ?
*> 10.20.6.0/24 10.20.3.2 0 0 64602 ?
Displayed 28 routes and 36 total paths
Loopbacks 10.20.2.x/32 are unique per service
Service loopbacks are
seen on two separate
containers
on two different hosts
with the local container taking
precedence
Switches/Routers
Desktop Lanner generic router/switch/compute
for Management Cloud
Mellanox Wire Speed Switching
for Customer Cloud
Mutual Management
1G Management PXE Boot 10G Traffic Interchange
Cloud 1 pxeboots off Cloud 2, Cloud 2 pxeboots off Cloud 3, and Cloud 3 pxeboots off Cloud 1
Solves the one management/cloud-box issue of mutual reboot/rebuild/reload
Cloud To Cloud – Traffic Interchange Redundancy
Salt
● Event-Driven IT Automation Software
● Infrastructure as Code (Self Documenting)
● Amongst other things:
– State files
– Pillar files
– Event Orchestration
Salt Layout
# pwd
/srv
# ls -alt
total 48
drwxr-xr-x 1 root root 146 Feb 13 00:12 ..
drwxr-xr-x 1 root root 204 Nov 1 23:35 .git
drwxr-xr-x 1 root root 830 Jul 1 2018 salt
drwxr-xr-x 1 root root 338 Jul 1 2018 pillar
drwxr-xr-x 1 root root 242 May 24 2018 .
drwxr-xr-x 1 root root 10 May 11 2018 reactor
Salt State Files
# ls salt
apc dhcp-relay interface ntpd sheepdog user
apt diagnosis ipmi opensmtpd smartmontools users
apt-cacher-ng dnsmasq iptables openvswitch squid util
apt-mirror frr ipv6 orchestrate ssh vim
bash git keepalived ovs_ni sshd virt-manager
bind9 highstate libvirt _packages strongswan virt-what
bonding hostapd lldpd resolv sudo vrrpd
bridge hostname lxc root sysctl zfs
cmk ifb mapvlans rsyslog systemd
default ifplugd netbox_ipam salt tmux
dhclient ifupdown2 nftables sensors top.sls
A configuration may require a combination of many different services
Some Pillar Files (YAML)
/srv# ls -l pillar/net/example/ny1/
-rw-r--r-- 1 root root 4998 May 11 2018 checklist.txt
-rw-r--r-- 1 root root 1320 May 24 2018 dnsmasq.sls
drwxr-xr-x 1 root root 234 Mar 3 16:10 host01
-rw-r--r-- 1 root root 6385 Jul 1 2018 host01.sls
drwxr-xr-x 1 root root 218 May 24 2018 host02
-rw-r--r-- 1 root root 6272 May 24 2018 host02.sls
-rw-r--r-- 1 root root 731 May 11 2018 lxc.sls
drwxr-xr-x 1 root root 40 May 11 2018 smtpd
-rw-r--r-- 1 root root 1057 May 16 2018 vni.sls
/srv# ls -l pillar/net/example/ny1/host01/
-rw-r--r-- 1 root root 2035 May 11 2018 bind01.sls
-rw-r--r-- 1 root root 2624 May 24 2018 cmk01.sls
-rw-r--r-- 1 root root 243 May 11 2018 cmk-agent.sls
-rw-r--r-- 1 root root 2769 May 11 2018 dmsq01.sls
-rw-r--r-- 1 root root 4741 May 24 2018 edge01.sls
-rw-r--r-- 1 root root 4357 May 24 2018 fw01.sls
-rw-r--r-- 1 root root 3320 May 11 2018 nacl01.sls
-rw-r--r-- 1 root root 2266 May 11 2018 ntp01.sls
-rw-r--r-- 1 root root 2698 May 11 2018 pprx01.sls
-rw-r--r-- 1 root root 2693 May 11 2018 prxy01.sls
-rw-r--r-- 1 root root 2308 May 11 2018 smtp01.sls
YAML: YAML Ain’t Markup Language (originally Yet Another Markup Language)
top.sls (salt/pillar)
# salt
base:
  '*.example.net':
    - apt.sources
    - apt.common
    - default.networking
    - systemd.timesyncd
    - sshd
    - ntpd
    - root
    - resolv
  'fw0?.ny1.example.net':
    - ipv6
    - hostname
    - sudo
    - bash
    - vim
    - sysctl.routing
    - users
    - frr
    - ifupdown2
    - sshd.ifupdown2
    - nftables
    - cmk.agent
.....
# pillar
base:
  '*':
    - services.ntp
  'fw01.ny1.example.net':
    - net.example.ny1.host01.fw01
    - users
  'fw02.ny1.example.net':
    - net.example.ny1.host02.fw02
    - users
.....
All Salt state files defined in salt/top.sls
All Salt pillar files defined in pillar/top.sls
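With the targeting above in place, applying or previewing a minion's full state is a single Salt command (a command sketch run on the master, using the minion ids from the example):

```shell
# preview what would change on the firewall container, without applying
salt 'fw01.ny1.example.net' state.highstate test=True
# apply the full configured state
salt 'fw01.ny1.example.net' state.highstate
```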
Example 1 - Salt State File - nftables
nftables-packages:
  pkg.installed:
    - pkgs:
      - nftables
      - iptstate
      - netstat-nat
      - pktstat
      - tcpdump
      - traceroute
      - ulogd2
      - conntrack
      - conntrackd
      - net-tools
service_nftables:
  service.enabled:
    - name: nftables
{% set target = "/etc/nftables.conf" %}
{{ target }}:
  file.managed:
    - source: salt://nftables/firewall.py.nft
    - template: py
    - mode: 644
    - user: root
    - group: root
    - require:
      - pkg: nftables-packages
apply_nft:
  cmd.run:
    - name: /usr/sbin/nft -f {{ target }}
    - runas: root
    - require:
      - file: {{ target }}
      - pkg: nftables-packages
    - onchanges:
      - file: {{ target }}
Peppered with Jinja2 templating
Salt state file is essentially a sequence
of recipes for defining/building a
particular service
Install Packages
Ensure service is running
Build Configuration file
Apply the configuration file
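A hedged addition, not part of the state file shown: nft can parse and validate the generated file without installing it, which makes a useful guard step before apply_nft loads the ruleset:

```shell
# dry-run check of the generated ruleset (-c parses without installing)
nft -c -f /etc/nftables.conf && echo "ruleset parses cleanly"
```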
Nftables YAML to Config to Running
policy:
  local-private:
    from: local
    to: private
    default: accept
  local-public:
    from: local
    to: public
    default: accept
  private-local:
    from: private
    to: local
    default: accept
  private-public:
    from: private
    to: public
    default: drop
  public-private:
    from: public
    to: private
    default: drop
  public-local:
    from: public
    to: local
    default: drop
    rule:
      # salt clients
      - proto: tcp
        saddr:
          - 192.168.195.100
          - 172.16.42.192/27
          - 172.16.43.192/27
          - 172.16.42.224/28
          - 172.16.43.224/28
        dport:
          - 4505
          - 4506
      - proto: tcp
        saddr:
          - 192.168.195.100
        sport:
          - 4505
          - 4506
      # ssh from anywhere
      - proto: tcp
        dport: 22
# excerpt from /etc/nftables.conf:
add chain ip filter public_local
add rule ip filter public_local tcp dport {4505,4506} ip saddr {192.168.195.100,172.16.42.192/27,172.16.43.192/27,172.16.42.224/28,172.16.43.224/28} accept
add rule ip filter public_local tcp sport {4505,4506} ip saddr {192.168.195.100} accept
add rule ip filter public_local tcp dport 22 accept
add rule ip filter input iifname eth443 goto public_local
add rule ip filter public_local iifname eth443 counter goto loginput
add rule ip filter public_local log prefix "public_local:DROP:" group 0 counter drop
# excerpt from nft list ruleset:
chain public_local {
  tcp dport { 4505, 4506} ip saddr { 172.16.42.192-172.16.42.239, 172.16.43.192-172.16.43.239, 192.168.195.100} accept
  tcp sport { 4505, 4506} ip saddr { 192.168.195.100} accept
  tcp dport ssh accept
  iifname "eth443" counter packets 155364 bytes 7981388 goto loginput
  log prefix "public_local:DROP:" group 0 counter packets 0 bytes 0 drop
}
A simple zone-based firewall configuration in YAML in the pillar file
Excerpt from the auto-generated configuration file, based upon the above YAML
Once the configuration file is loaded into the kernel via nftables, the resulting installed ruleset can be viewed
Example 2 - Network Constructs
brPub421
(linux bridge)
(connects to FRR)
veth
ovsbr0
(Open vSwitch bridge)
vlan421
vbPub421
voPub421
veth
edge01
(lxc)
fw01
(lxc)
veth
eth421 eth421
ve-edge01-v421 ve-fw01-v421
enp2s0f1 [physical]
(vxlan encap on ip)
vxPub421
(linux vxlan interface)
(mac/ip to FRR)
(encap over net)
Ex2 - Map Salt -> Interface/BGP Config
# less pillar/net/example/ny1/host01.sls
enp2s0f1:
  description: enp2s0f1.host02.ny1.example.net
  auto: True
  inet: manual
  addresses:
    - 10.20.3.3/31
  bgp:
    prefix_lists:
      plIpv4ConnIntMgmt:
        - prefix: 10.20.3.2/31
    neighbors:
      - remoteas: 64602
        peer:
          ipv4: 10.20.3.2
        password: oneunified
  mtu: 9000
# portion of /etc/network/interfaces:
# description: enp2s0f1.host02.ny1.example.net
auto enp2s0f1
iface enp2s0f1
    address 10.20.3.3/31
    mtu 9000
# part of bgp route-map
ip prefix-list plIpv4ConnIntMgmt seq 5 permit 10.20.5.0/24
ip prefix-list plIpv4ConnIntMgmt seq 10 permit 10.20.3.2/31
route-map rmIpv4Connected permit 110
match ip address prefix-list plIpv4ConnLoop
set community 64601:1001
!
route-map rmIpv4Connected permit 120
match ip address prefix-list plIpv4ConnIntMgmt
set community 64601:1002 64601:1202
!
route-map rmIpv4Connected permit 130
match ip address prefix-list plIpv4ConnInt
set community 64601:1002
!
route-map rmIpv4Connected deny 190
# linux bash
# ip route show 10.20.3.2/31
10.20.3.2/31 dev enp2s0f1 proto kernel scope link src 10.20.3.3
# free range routing vtysh
host01.ny1# sh ip route 10.20.3.2/31
Routing entry for 10.20.3.2/31
Known via "connected", distance 0, metric 0, best
Last update 07w1d22h ago
* directly connected, enp2s0f1
# vtysh sh run excerpt
router bgp 64601
bgp router-id 10.20.1.1
bgp log-neighbor-changes
no bgp default ipv4-unicast
bgp default show-hostname
coalesce-time 1000
neighbor 10.20.3.2 remote-as 64602
neighbor 10.20.3.2 password oneunified
This excerpt of a pillar file is used to build ...
... BGP configuration
... interface configuration
With the following run-time results:
Parameters in the pillar file are kept together to
facilitate readability and clarify relationships
VNI -> Pillar for VxLAN
# cat pillar/net/example/ny1/vni.sls
#
# the vni is used to build the second part of a route distinguisher (rd)
# type 0: 2 byte ASN, 4 byte value
# type 1: 4 byte IP, 2 byte value
# type 2: 4 byte ASN, 2 byte value
# if vlans are kept in the range of 1 - 999:
# use a realm of 1 - 64, use rd of
# ip:rrvvv
# up to 16m vxlan identifiers can be used, will need to evolve if/when
# scale requires it
# but... since ebgp is being used predominantly, which provides a unique asn to each
# device, it is conceivable that type 0 RDs could be used, which would provide
# for the 16 million vxlan identifiers
vni:
  - id: 1012
    desc: vlan12 10.20.7.0/24
    member:
      - 10.20.1.1
      - 10.20.1.2
  - id: 1101
    desc: edge0[1-2] v101
    member:
      - 10.20.1.1
      - 10.20.1.2
  - id: 1421
    desc: public services
    member:
      - 10.20.1.1
      - 10.20.1.2
Some pillar files have
information shared across
multiple instances –
common configuration
elements are factored out
and included in the top.sls
file where necessary
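The rd and route-target lines generated from this pillar follow a simple <router-ip>:<vni> pattern; a tiny sketch of that mapping (the helper name is hypothetical, not part of the Salt code):

```shell
# derive "rd <router-ip>:<vni>" lines from a vni list, mirroring the
# pattern in the generated BGP configuration
make_rd() {
  router_ip=$1; shift
  for vni in "$@"; do
    printf 'rd %s:%s\n' "$router_ip" "$vni"
  done
}
make_rd 10.20.1.1 1012 1101 1421
# prints:
# rd 10.20.1.1:1012
# rd 10.20.1.1:1101
# rd 10.20.1.1:1421
```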
Auto Config: BGP, Interfaces, Links
# excerpt from BGP configuration file
address-family l2vpn evpn
neighbor 10.20.3.2 activate
vni 1101
rd 10.20.1.1:1101
route-target import 10.20.1.2:1101
route-target export 10.20.1.1:1101
exit-vni
vni 1012
rd 10.20.1.1:1012
route-target import 10.20.1.2:1012
route-target export 10.20.1.1:1012
exit-vni
vni 1421
rd 10.20.1.1:1421
route-target import 10.20.1.2:1421
route-target export 10.20.1.1:1421
exit-vni
advertise-all-vni
exit-address-family
# excerpt from /etc/network/interfaces:
# description: shared external containers
auto vlan421
iface vlan421
pre-up brctl addbr brPub421
pre-up brctl stp brPub421 off
up ip link set dev brPub421 up
pre-up ip link add vxPub421 type vxlan id 1421 dstport 4789 local 10.20.1.1 nolearning
pre-up brctl addif brPub421 vxPub421
up ip link set dev vxPub421 up
pre-up ip link add vbPub421 type veth peer name voPub421
pre-up brctl addif brPub421 vbPub421
pre-up ovs-vsctl --may-exist add-port ovsbr0 voPub421 tag=421
up ip link set dev vbPub421 up
up ip link set dev voPub421 up
down ip link set dev vbPub421 down
down ip link set dev voPub421 down
pre-up ovs-vsctl --may-exist add-port ovsbr0 vlan421 tag=421 -- set interface vlan421 type=internal
post-down ovs-vsctl --if-exists del-port ovsbr0 vlan421
# ip link show dev brPub421
17: brPub421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group
default qlen 1000
link/ether 6e:56:4f:62:7c:82 brd ff:ff:ff:ff:ff:ff
# ip link show vxPub421
18: vxPub421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brPub421 state UNKNOWN
mode DEFAULT group default qlen 1000
link/ether ee:38:74:6c:99:3f brd ff:ff:ff:ff:ff:ff
# ip link show voPub421
19: voPub421@vbPub421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system
state UP mode DEFAULT group default qlen 1000
link/ether 9a:e4:51:35:89:83 brd ff:ff:ff:ff:ff:ff
# ip link show vbPub421
20: vbPub421@voPub421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brPub421 state
UP mode DEFAULT group default qlen 1000
link/ether 6e:56:4f:62:7c:82 brd ff:ff:ff:ff:ff:ff
# ip link show vlan421
21: vlan421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group
default qlen 1000
link/ether 62:06:81:20:29:09 brd ff:ff:ff:ff:ff:ff
a) a simple config is used
to build ...
b) ... the complicated interface configuration in the diagram shown previously ....
c) ... with the resulting instances installed into the kernel
Process
● With Salt state, pillar and reactor files defined for all services
and configuration elements, only two commands are necessary
to rebuild any one of the three cloud management boxes:
– destroy the boot sector
– reboot
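On a BIOS/MBR install those two steps amount to something like the following (destructive; device name illustrative; overwriting only the 446 bytes of boot code leaves the partition table intact and lets the box fall through to PXE on the next boot):

```shell
# wipe the MBR boot code so the box PXE-boots and rebuilds itself
dd if=/dev/zero of=/dev/sda bs=446 count=1
reboot
```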
Process
● Upon reboot, the physical box obtains the pxeboot installation
files, allocates and formats the file system, installs the operating
system, installs the Salt agent, and automatically reboots
● Upon that reboot, the Salt agent contacts one of the remaining
Salt masters and automatically starts provisioning the system
and services as defined in the Salt state and pillar files
● LXC containers are instantiated and started at this time
● The Salt agent in each container contacts the Salt master to
initiate the build of that specific container, using services
supplied by the containers on the surviving hosts
Results
● Builds are:
– Fully Documented
– Reproducible
– Repeatable
– Automated
Thank You!
Any Questions?
raymond@burkholder.net
ray@oneunified.net
Extra Slides
Intro To Flows
● Packet: ethernet, ip, udp/tcp, content
● Ethernet: dst mac, src mac
● Protocol: ip/udp/tcp
● IP Five-tuple: proto, source ip/port, dest ip/port
● TCP Flags: syn, ack, ....
● Basis for switching, routing, firewalling
Interesting Network Tools
● OpenFlow
● eBPF
● XDP

Más contenido relacionado

La actualidad más candente

Open vswitch datapath implementation
Open vswitch datapath implementationOpen vswitch datapath implementation
Open vswitch datapath implementationVishal Kapoor
 
Openv switchの使い方とか
Openv switchの使い方とかOpenv switchの使い方とか
Openv switchの使い方とかkotto_hihihi
 
Large scale overlay networks with ovn: problems and solutions
Large scale overlay networks with ovn: problems and solutionsLarge scale overlay networks with ovn: problems and solutions
Large scale overlay networks with ovn: problems and solutionsHan Zhou
 
OpenvSwitch Deep Dive
OpenvSwitch Deep DiveOpenvSwitch Deep Dive
OpenvSwitch Deep Diverajdeep
 
Mininet multiple controller
Mininet   multiple controllerMininet   multiple controller
Mininet multiple controllerCatur Mei Rahayu
 
Kubernetes from scratch at veepee sysadmins days 2019
Kubernetes from scratch at veepee   sysadmins days 2019Kubernetes from scratch at veepee   sysadmins days 2019
Kubernetes from scratch at veepee sysadmins days 2019🔧 Loïc BLOT
 
Fedora Virtualization Day: Linux Containers & CRIU
Fedora Virtualization Day: Linux Containers & CRIUFedora Virtualization Day: Linux Containers & CRIU
Fedora Virtualization Day: Linux Containers & CRIUAndrey Vagin
 
Linux Native VXLAN Integration - CloudStack Collaboration Conference 2013, Sa...
Linux Native VXLAN Integration - CloudStack Collaboration Conference 2013, Sa...Linux Native VXLAN Integration - CloudStack Collaboration Conference 2013, Sa...
Linux Native VXLAN Integration - CloudStack Collaboration Conference 2013, Sa...Toshiaki Hatano
 
Open stack networking vlan, gre
Open stack networking   vlan, greOpen stack networking   vlan, gre
Open stack networking vlan, greSim Janghoon
 
MySQL Galera 集群
MySQL Galera 集群MySQL Galera 集群
MySQL Galera 集群YUCHENG HU
 
CRIU: Time and Space Travel for Linux Containers
CRIU: Time and Space Travel for Linux ContainersCRIU: Time and Space Travel for Linux Containers
CRIU: Time and Space Travel for Linux ContainersKirill Kolyshkin
 
Anatomy of neutron from the eagle eyes of troubelshoorters
Anatomy of neutron from the eagle eyes of troubelshoortersAnatomy of neutron from the eagle eyes of troubelshoorters
Anatomy of neutron from the eagle eyes of troubelshoortersSadique Puthen
 
OVN operationalization at scale at eBay
OVN operationalization at scale at eBayOVN operationalization at scale at eBay
OVN operationalization at scale at eBayAliasgar Ginwala
 
Percona XtraDB 集群安装与配置
Percona XtraDB 集群安装与配置Percona XtraDB 集群安装与配置
Percona XtraDB 集群安装与配置YUCHENG HU
 
Ovs perf
Ovs perfOvs perf
Ovs perfMadhu c
 

La actualidad más candente (19)

Corralling Big Data at TACC
Corralling Big Data at TACCCorralling Big Data at TACC
Corralling Big Data at TACC
 
Open vswitch datapath implementation
Open vswitch datapath implementationOpen vswitch datapath implementation
Open vswitch datapath implementation
 
Openv switchの使い方とか
Openv switchの使い方とかOpenv switchの使い方とか
Openv switchの使い方とか
 
Large scale overlay networks with ovn: problems and solutions
Large scale overlay networks with ovn: problems and solutionsLarge scale overlay networks with ovn: problems and solutions
Large scale overlay networks with ovn: problems and solutions
 
OpenvSwitch Deep Dive
OpenvSwitch Deep DiveOpenvSwitch Deep Dive
OpenvSwitch Deep Dive
 
Mininet multiple controller
Mininet   multiple controllerMininet   multiple controller
Mininet multiple controller
 
Kubernetes from scratch at veepee sysadmins days 2019
Kubernetes from scratch at veepee   sysadmins days 2019Kubernetes from scratch at veepee   sysadmins days 2019
Kubernetes from scratch at veepee sysadmins days 2019
 
Fedora Virtualization Day: Linux Containers & CRIU
Fedora Virtualization Day: Linux Containers & CRIUFedora Virtualization Day: Linux Containers & CRIU
Fedora Virtualization Day: Linux Containers & CRIU
 
Linux Native VXLAN Integration - CloudStack Collaboration Conference 2013, Sa...
Linux Native VXLAN Integration - CloudStack Collaboration Conference 2013, Sa...Linux Native VXLAN Integration - CloudStack Collaboration Conference 2013, Sa...
Linux Native VXLAN Integration - CloudStack Collaboration Conference 2013, Sa...
 
Open stack networking vlan, gre
Open stack networking   vlan, greOpen stack networking   vlan, gre
Open stack networking vlan, gre
 
Quic illustrated
Quic illustratedQuic illustrated
Quic illustrated
 
MySQL Galera 集群
MySQL Galera 集群MySQL Galera 集群
MySQL Galera 集群
 
CRIU: Time and Space Travel for Linux Containers
CRIU: Time and Space Travel for Linux ContainersCRIU: Time and Space Travel for Linux Containers
CRIU: Time and Space Travel for Linux Containers
 
Anatomy of neutron from the eagle eyes of troubelshoorters
Anatomy of neutron from the eagle eyes of troubelshoortersAnatomy of neutron from the eagle eyes of troubelshoorters
Anatomy of neutron from the eagle eyes of troubelshoorters
 
IP anycasting
 IP anycasting IP anycasting
IP anycasting
 
OVN operationalization at scale at eBay
OVN operationalization at scale at eBayOVN operationalization at scale at eBay
OVN operationalization at scale at eBay
 
Percona XtraDB 集群安装与配置
Percona XtraDB 集群安装与配置Percona XtraDB 集群安装与配置
Percona XtraDB 集群安装与配置
 
Tag your Routes Before Redistribution
Tag your Routes Before Redistribution Tag your Routes Before Redistribution
Tag your Routes Before Redistribution
 
Ovs perf
Ovs perfOvs perf
Ovs perf
 

Hand in Hand
● eBGP vs iBGP
– Multiple ASNs vs a single ASN (eBGP is used in this installation)
● VxLAN vs VLAN
– 16 million encapsulations vs 4,000 encapsulations
– VXLAN, also called virtual extensible LAN, is designed to provide layer 2 overlay networks on top of a layer 3 network by using MAC-address-in-UDP (MAC-in-UDP) encapsulation. In simple terms, VXLAN can offer the same services as VLAN does, but with greater extensibility and flexibility.
● aka EVPN via MP-BGP (Ethernet VPN via Multi-Protocol BGP), used for auto-distribution of VxLAN MAC/IP

“Layer 2 is cocaine. It has never been right — and yet people keep packaging it in various ways and selling its virtues and capabilities.” -- @trumanboyes
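As a concrete flavour of the MAC-in-UDP encapsulation described above, an ifupdown2-style stanza can create a VXLAN interface and bridge it for container use. This is a sketch only: the interface names (vxDemo, brDemo), VNI 1100, and VTEP address 10.20.1.1 are illustrative assumptions, not taken from the installation.

```
# /etc/network/interfaces sketch - assumed names vxDemo/brDemo, VNI 1100, local VTEP 10.20.1.1
auto vxDemo
iface vxDemo
    # VXLAN interface: VNI 1100, IANA UDP port 4789, no source-MAC learning (EVPN supplies MAC/IP)
    pre-up ip link add vxDemo type vxlan id 1100 dstport 4789 local 10.20.1.1 nolearning
    pre-up brctl addbr brDemo
    pre-up brctl addif brDemo vxDemo
    up ip link set dev brDemo up
    up ip link set dev vxDemo up
```

The production form of this pattern appears in the auto-generated vlan421/vxPub421 stanza near the end of the deck.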
Light vs Heavy Virtualization
● LXC (Linux Containers) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel.
● KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions ... that provides the core virtualization infrastructure ... where one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.
Virtualization Selection
● Since no customer applications run on the management cloud hosts, light virtualization in the form of LXC containers is used
● The goal is to keep the base host install as plain and simple as possible – all services and management functionality are segregated into individual containers
● Containers and their configurations can then be destroyed and rebuilt at will as bugs and upgrades require
Containers
● pprx0[1-3] – apt-cacher-ng – package proxy/caching
● edge0[1-2] – edge router
● fw0[1-2] – firewall
● nacl0[1-3] – SaltStack master
● bind0[1-3] – dns/bind external resolution
● dmsq0[1-3] – dnsmasq – internal dns, dhcp, pxeboot, tftp
● cmk0[1-3] – check_mk (nagios wrapper)
● smtp0[1-3] – email server, notifications
One Physical Instance
[Diagram: EDGE, FW, DMSQ, PPRX, SMTP, BIND, NACL, and CMK containers with inter-container routing, spanning public and private addressing between the INTERNET (with SSH/VPN access) and the Customer Cloud]
Some services/containers should not be directly connected to the ‘outside’ world, and should instead be proxied via service-specific intermediaries.
Resiliency
● Choices:
– Consul (dns for service resolution)
● Requires heartbeats for each service type
– HAProxy (layer 3 load balancing – userland)
● Overkill for this service load type
– IPVS (layer 4 kernel-based load balancing)
● Only local to the machine
– BGP AnyCast (routing-based load distribution)
● Proven routing-based resiliency
AnyCast
● Add a container-unique loopback address
● Add a service-common loopback address
– advertised into BGP by each common-service container
● When a container dies, its common loopback address advertisement disappears
● Loopback addresses are weighted in BGP so that local clients prefer the local service instance
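The anycast pattern above can be sketched as two loopback /32s plus a BGP advertisement. The addresses and ASNs follow the routing tables on the next slides; the host-side neighbor address 10.20.5.1 is an assumption, and the stanzas are illustrative rather than the installation's actual files.

```
# inside a service container: unique plus shared loopbacks (sketch)
auto lo
iface lo inet loopback
    up ip addr add 10.20.1.17/32 dev lo    # container-unique loopback
    up ip addr add 10.20.2.101/32 dev lo   # service-common (anycast) loopback

# frr.conf sketch: advertise the connected loopbacks to the host's BGP instance
router bgp 64701
 neighbor 10.20.5.1 remote-as 64601
 address-family ipv4 unicast
  redistribute connected
 exit-address-family
```

When the container dies, its BGP session drops and both /32s are withdrawn, so traffic to 10.20.2.101 is re-routed to a surviving instance of the same service.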
Host Functions
● Host functions are minimized; management functions are relegated to containers
● The host runs the main BGP router, which connects to the BGP instances of each of the other two hosts
● Configured to handle the VxLAN/EVPN MAC/IP advertisements to/from each container
● Keeps container traffic segregated from the host's ‘native’ routing tables – virtualizes networking within and across the hosts
eBGP
● The next set of slides shows eBGP routing tables to illustrate the resiliency created by routing
● A non-production, two-cloudbox installation is shown as an example
host01.ny1 neighbors

host01.ny1# sh ip bgp sum

IPv4 Unicast Summary:
BGP router identifier 10.20.1.1, local AS number 64601 vrf-id 0
BGP table version 62
RIB entries 55, using 8360 bytes of memory
Peers 9, using 174 KiB of memory

Neighbor                V    AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down  State/PfxRcd
host02.ny1(10.20.3.2)   4 64602  100218  100229      0   0    0 07w4d05h           18
pprx01.ny1(10.20.5.11)  4 64701  100132  100147      0   0    0 09w6d12h            2
nacl01.ny1(10.20.5.12)  4 64702  100139  100157      0   0    0 09w6d06h            2
ntp01.ny1(10.20.5.13)   4 64705  100132  100148      0   0    0 09w6d12h            2
dmsq01.ny1(10.20.5.14)  4 64703  100133  100149      0   0    0 09w6d12h            2
bind01.ny1(10.20.5.15)  4 64706  100133  100150      0   0    0 09w6d12h            2
prxy01.ny1(10.20.5.17)  4 64704  100132  100146      0   0    0 09w6d12h            2
smtp01.ny1(10.20.5.18)  4 64707  100132  100145      0   0    0 09w6d12h            2
fw01.ny1(10.20.5.19)    4 64708  100130  100148      0   0    0 09w6d12h            1

Total number of neighbors 9

host01 has private ASN 64601, host02 has ASN 64602
host02.ny1 neighbors

host02.ny1# sh ip bgp sum

IPv4 Unicast Summary:
BGP router identifier 10.20.1.2, local AS number 64602 vrf-id 0
BGP table version 54
RIB entries 55, using 8360 bytes of memory
Peers 9, using 174 KiB of memory

Neighbor                V    AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down  State/PfxRcd
host01.ny1(10.20.3.3)   4 64601  100233  100223      0   0    0 07w4d05h           18
pprx02.ny1(10.20.6.11)  4 64801  100135  100145      0   0    0 09w6d12h            2
nacl02.ny1(10.20.6.12)  4 64802  100135  100145      0   0    0 09w6d12h            2
ntp02.ny1(10.20.6.13)   4 64805  100135  100145      0   0    0 09w6d12h            2
dmsq02.ny1(10.20.6.14)  4 64803  100135  100146      0   0    0 09w6d12h            2
bind02.ny1(10.20.6.15)  4 64806  100136  100147      0   0    0 09w6d12h            2
prxy02.ny1(10.20.6.17)  4 64804  100135  100145      0   0    0 09w6d12h            2
smtp02.ny1(10.20.6.18)  4 64807  100135  100144      0   0    0 09w6d12h            2
fw02.ny1(10.20.6.19)    4 64808  100134  100145      0   0    0 09w6d12h            1

Total number of neighbors 9

Containers on host01 have private ASN 647xx, host02 containers use ASN 648xx
host01.ny1 loopbacks view A

host01.ny1# sh ip bgp
BGP table version is 62, local router ID is 10.20.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
              i internal, r RIB-failure, S Stale, R Removed
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop    Metric LocPrf Weight Path
*> 10.20.1.1/32     0.0.0.0          0        32768 ?
*> 10.20.1.2/32     10.20.3.2        0            0 64602 ?
*> 10.20.1.17/32    10.20.5.11       0            0 64701 ?
*> 10.20.1.18/32    10.20.5.12       0            0 64702 ?
*> 10.20.1.19/32    10.20.5.14       0            0 64703 ?
*> 10.20.1.20/32    10.20.5.17       0            0 64704 ?
*> 10.20.1.21/32    10.20.5.13       0            0 64705 ?
*> 10.20.1.22/32    10.20.5.15       0            0 64706 ?
*> 10.20.1.23/32    10.20.5.18       0            0 64707 ?
*> 10.20.1.24/32    10.20.5.19       0            0 64708 ?
*> 10.20.1.33/32    10.20.3.2                     0 64602 64801 ?
*> 10.20.1.34/32    10.20.3.2                     0 64602 64802 ?
*> 10.20.1.35/32    10.20.3.2                     0 64602 64803 ?
*> 10.20.1.36/32    10.20.3.2                     0 64602 64804 ?
*> 10.20.1.37/32    10.20.3.2                     0 64602 64805 ?
*> 10.20.1.38/32    10.20.3.2                     0 64602 64806 ?
*> 10.20.1.39/32    10.20.3.2                     0 64602 64807 ?
*> 10.20.1.40/32    10.20.3.2                     0 64602 64808 ?
... on next slide

Loopbacks 10.20.1.x/32 are unique per container
Containers on host01 are seen as local hops
Containers on host02 are seen as two hops away, via host02
host01.ny1 loopbacks view B

*  10.20.2.101/32   10.20.3.2                     0 64602 64801 ?
*>                  10.20.5.11       0            0 64701 ?
*  10.20.2.102/32   10.20.3.2                     0 64602 64802 ?
*>                  10.20.5.12       0            0 64702 ?
*  10.20.2.103/32   10.20.3.2                     0 64602 64803 ?
*>                  10.20.5.14       0            0 64703 ?
*  10.20.2.104/32   10.20.3.2                     0 64602 64804 ?
*>                  10.20.5.17       0            0 64704 ?
*  10.20.2.105/32   10.20.3.2                     0 64602 64805 ?
*>                  10.20.5.13       0            0 64705 ?
*  10.20.2.106/32   10.20.3.2                     0 64602 64806 ?
*>                  10.20.5.15       0            0 64706 ?
*  10.20.2.107/32   10.20.3.2                     0 64602 64807 ?
*>                  10.20.5.18       0            0 64707 ?
*  10.20.3.2/31     10.20.3.2        0            0 64602 ?
*>                  0.0.0.0          0        32768 ?
*> 10.20.5.0/24     0.0.0.0          0        32768 ?
*> 10.20.6.0/24     10.20.3.2        0            0 64602 ?

Displayed 28 routes and 36 total paths

Loopbacks 10.20.2.x/32 are unique per service
Service loopbacks are seen on two separate containers on two different hosts, with the local container taking precedence
Switches/Routers
● Desktop Lanner generic router/switch/compute for the Management Cloud
● Mellanox wire-speed switching for the Customer Cloud
Mutual Management
● 1G management PXE boot
● 10G traffic interchange
Cloud 1 pxeboots off Cloud 2, Cloud 2 pxeboots off Cloud 3, and Cloud 3 pxeboots off Cloud 1
Solves the one-management-cloudbox issue of mutual reboot/rebuild/reload
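The cross-PXE arrangement above amounts to each cloudbox's dnsmasq container serving DHCP and TFTP to a neighbour's 1G management segment. A minimal dnsmasq.conf sketch follows; the subnet, lease range, and boot file layout are assumptions for illustration, not the installation's values.

```
# dnsmasq.conf sketch - subnet and paths are illustrative assumptions
dhcp-range=10.20.9.100,10.20.9.150,12h   # neighbour's 1G management segment
dhcp-boot=pxelinux.0                     # BIOS PXE loader, served via TFTP
enable-tftp
tftp-root=/srv/tftp
```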
Cloud To Cloud – Traffic Interchange Redundancy
Salt
● Event-driven IT automation software
● Infrastructure as code (self-documenting)
● Amongst other things:
– State files
– Pillar files
– Event orchestration
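As a flavour of the "infrastructure as code" idea, a state file pairs an ID with a state module and its arguments, and is applied with something like `salt '*' state.apply demo`. The state below is hypothetical (a demo state installing tmux and managing /etc/motd), kept minimal for illustration; the real state files in this installation follow on the next slides.

```yaml
# salt/demo/init.sls - hypothetical minimal state file
demo-packages:
  pkg.installed:
    - pkgs:
      - tmux

/etc/motd:
  file.managed:
    - contents: managed by salt
    - mode: 644
```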
Salt Layout

# pwd
/srv
# ls -alt
total 48
drwxr-xr-x 1 root root 146 Feb 13 00:12 ..
drwxr-xr-x 1 root root 204 Nov  1 23:35 .git
drwxr-xr-x 1 root root 830 Jul  1  2018 salt
drwxr-xr-x 1 root root 338 Jul  1  2018 pillar
drwxr-xr-x 1 root root 242 May 24  2018 .
drwxr-xr-x 1 root root  10 May 11  2018 reactor
Salt State Files

# ls salt
apc            dhcp-relay  interface    ntpd        sheepdog       user
apt            diagnosis   ipmi         opensmtpd   smartmontools  users
apt-cacher-ng  dnsmasq     iptables     openvswitch squid          util
apt-mirror     frr         ipv6         orchestrate ssh            vim
bash           git         keepalived   ovs_ni      sshd           virt-manager
bind9          highstate   libvirt      resolv      sudo           vrrpd
bonding        hostapd     lldpd        root        sysctl         zfs
bridge         hostname    lxc          rsyslog     systemd
cmk            ifb         mapvlans     salt        tmux
default        ifplugd     netbox_ipam  sensors     top.sls
dhclient       ifupdown2   nftables

A configuration may require a combination of many different services
Some Pillar Files (YAML)

/srv# ls -l pillar/net/example/ny1/
-rw-r--r-- 1 root root 4998 May 11  2018 checklist.txt
-rw-r--r-- 1 root root 1320 May 24  2018 dnsmasq.sls
drwxr-xr-x 1 root root  234 Mar  3 16:10 host01
-rw-r--r-- 1 root root 6385 Jul  1  2018 host01.sls
drwxr-xr-x 1 root root  218 May 24  2018 host02
-rw-r--r-- 1 root root 6272 May 24  2018 host02.sls
-rw-r--r-- 1 root root  731 May 11  2018 lxc.sls
drwxr-xr-x 1 root root   40 May 11  2018 smtpd
-rw-r--r-- 1 root root 1057 May 16  2018 vni.sls

/srv# ls -l pillar/net/example/ny1/host01/
-rw-r--r-- 1 root root 2035 May 11  2018 bind01.sls
-rw-r--r-- 1 root root 2624 May 24  2018 cmk01.sls
-rw-r--r-- 1 root root  243 May 11  2018 cmk-agent.sls
-rw-r--r-- 1 root root 2769 May 11  2018 dmsq01.sls
-rw-r--r-- 1 root root 4741 May 24  2018 edge01.sls
-rw-r--r-- 1 root root 4357 May 24  2018 fw01.sls
-rw-r--r-- 1 root root 3320 May 11  2018 nacl01.sls
-rw-r--r-- 1 root root 2266 May 11  2018 ntp01.sls
-rw-r--r-- 1 root root 2698 May 11  2018 pprx01.sls
-rw-r--r-- 1 root root 2693 May 11  2018 prxy01.sls
-rw-r--r-- 1 root root 2308 May 11  2018 smtp01.sls

YAML: YAML Ain't Markup Language
top.sls (salt/pillar)

All Salt state files are defined in salt/top.sls; all Salt pillar files are defined in pillar/top.sls.

# salt
base:
  '*.example.net':
    - apt.sources
    - apt.common
    - default.networking
    - systemd.timesyncd
    - sshd
    - ntpd
    - root
    - resolv
  fw0?.ny1.example.net:
    - ipv6
    - hostname
    - sudo
    - bash
    - vim
    - sysctl.routing
    - users
    - frr
    - ifupdown2
    - sshd.ifupdown2
    - nftables
    - cmk.agent
.....

# pillar
base:
  '*':
    - services.ntp
  fw01.ny1.example.net:
    - net.example.ny1.host01.fw01
    - users
  fw02.ny1.example.net:
    - net.example.ny1.host02.fw02
    - users
.....
Example 1 - Salt State File - nftables

A Salt state file is essentially a sequence of recipes for defining/building a particular service, peppered with Jinja2 templating.

Install packages:

nftables-packages:
  pkg.installed:
    - pkgs:
      - nftables
      - iptstate
      - netstat-nat
      - pktstat
      - tcpdump
      - traceroute
      - ulogd2
      - conntrack
      - conntrackd
      - net-tools

Ensure the service is enabled:

service_nftables:
  service.enabled:
    - name: nftables

Build the configuration file:

{% set target = "/etc/nftables.conf" %}
{{ target }}:
  file.managed:
    - source: salt://nftables/firewall.py.nft
    - template: py
    - mode: 644
    - user: root
    - group: root
    - require:
      - pkg: nftables-packages

Apply the configuration file:

apply_nft:
  cmd.run:
    - name: /usr/sbin/nft -f {{ target }}
    - runas: root
    - require:
      - file: {{ target }}
      - pkg: nftables-packages
    - onchanges:
      - file: {{ target }}
Nftables – YAML to Config to Running

A simple zone-based firewall configuration in YAML in the pillar file:

policy:
  local-private:
    from: local
    to: private
    default: accept
  local-public:
    from: local
    to: public
    default: accept
  private-local:
    from: private
    to: local
    default: accept
  private-public:
    from: private
    to: public
    default: drop
  public-private:
    from: public
    to: private
    default: drop
  public-local:
    from: public
    to: local
    default: drop
rule:
  # salt clients
  - proto: tcp
    saddr:
      - 192.168.195.100
      - 172.16.42.192/27
      - 172.16.43.192/27
      - 172.16.42.224/28
      - 172.16.43.224/28
    dport:
      - 4505
      - 4506
  - proto: tcp
    saddr:
      - 192.168.195.100
    sport:
      - 4505
      - 4506
  # ssh from anywhere
  - proto: tcp
    dport: 22

Excerpt from the auto-generated configuration file, based upon the above YAML:

# excerpt from /etc/nftables.conf:
add chain ip filter public_local
add rule ip filter public_local tcp dport {4505,4506} ip saddr {192.168.195.100,172.16.42.192/27,172.16.43.192/27,172.16.42.224/28,172.16.43.224/28} accept
add rule ip filter public_local tcp sport {4505,4506} ip saddr {192.168.195.100} accept
add rule ip filter public_local tcp dport 22 accept
add rule ip filter input iifname eth443 goto public_local
add rule ip filter public_local iifname eth443 counter goto loginput
add rule ip filter public_local log prefix "public_local:DROP:" group 0 counter drop

Once the configuration file is installed into the kernel via nftables, the resulting ruleset can be viewed:

# excerpt from nft list ruleset:
chain public_local {
  tcp dport { 4505, 4506} ip saddr { 172.16.42.192-172.16.42.239, 172.16.43.192-172.16.43.239, 192.168.195.100} accept
  tcp sport { 4505, 4506} ip saddr { 192.168.195.100} accept
  tcp dport ssh accept
  iifname "eth443" counter packets 155364 bytes 7981388 goto loginput
  log prefix "public_local:DROP:" group 0 counter packets 0 bytes 0 drop
}
Example 2 - Network Constructs

[Diagram: lxc containers edge01 and fw01, each with an eth421 interface, attach via veth pairs (ve-edge01-v421, ve-fw01-v421) to ovsbr0 (Open vSwitch bridge) carrying vlan421; a vbPub421/voPub421 veth pair joins ovsbr0 to brPub421 (linux bridge, connects to FRR); vxPub421 (linux vxlan interface, mac/ip to FRR) encapsulates over the network (vxlan encap on ip) via the physical interface enp2s0f1]
Ex2 - Map Salt -> Interface/BGP Config

This excerpt of a pillar file is used to build both the interface and the BGP configuration; parameters are kept together in the pillar file to facilitate readability and clarify relationships:

# less pillar/net/example/ny1/host01.sls
enp2s0f1:
  description: enp2s0f1.host02.ny1.example.net
  auto: True
  inet: manual
  addresses:
    - 10.20.3.3/31
  bgp:
    prefix_lists:
      plIpv4ConnIntMgmt:
        - prefix: 10.20.3.2/31
    neighbors:
      - remoteas: 64602
        peer:
          ipv4: 10.20.3.2
        password: oneunified
  mtu: 9000

... interface configuration:

# portion of /etc/network/interfaces:
# description: enp2s0f1.host02.ny1.example.net
auto enp2s0f1
iface enp2s0f1
    address 10.20.3.3/31
    mtu 9000

... BGP configuration:

# part of bgp route-map
ip prefix-list plIpv4ConnIntMgmt seq 5 permit 10.20.5.0/24
ip prefix-list plIpv4ConnIntMgmt seq 10 permit 10.20.3.2/31
route-map rmIpv4Connected permit 110
 match ip address prefix-list plIpv4ConnLoop
 set community 64601:1001
!
route-map rmIpv4Connected permit 120
 match ip address prefix-list plIpv4ConnIntMgmt
 set community 64601:1002 64601:1202
!
route-map rmIpv4Connected permit 130
 match ip address prefix-list plIpv4ConnInt
 set community 64601:1002
!
route-map rmIpv4Connected deny 190

# vtysh sh run excerpt
router bgp 64601
 bgp router-id 10.20.1.1
 bgp log-neighbor-changes
 no bgp default ipv4-unicast
 bgp default show-hostname
 coalesce-time 1000
 neighbor 10.20.3.2 remote-as 64602
 neighbor 10.20.3.2 password oneunified

With the following run-time results:

# linux bash
# ip route show 10.20.3.2/31
10.20.3.2/31 dev enp2s0f1 proto kernel scope link src 10.20.3.3

# free range routing vtysh
host01.ny1# sh ip route 10.20.3.2/31
Routing entry for 10.20.3.2/31
  Known via "connected", distance 0, metric 0, best
  Last update 07w1d22h ago
  * directly connected, enp2s0f1
  • 33. VNI -> Pillar for VxLAN
Some pillar files contain information shared across multiple instances; common configuration elements are factored out and included in the top.sls file where necessary:

# cat pillar/net/example/ny1/vni.sls
#
# the vni is used to build the second part of a route distinguisher (rd)
#   type 0: 2 byte ASN, 4 byte value
#   type 1: 4 byte IP, 2 byte value
#   type 2: 4 byte ASN, 2 byte value
# if vlans are kept in the range of 1 - 999:
#   use a realm of 1 - 64, use rd of ip:rrvvv
# up to 16m vxlan identifiers can be used; will need to evolve if/when
#   scale requires it
# but... since ebgp is used predominately, which provides a unique asn to each
#   device, it is conceivable that type 0 RDs could be used, which would
#   provide for the 16 million vxlan identifiers
vni:
  - id: 1012
    desc: vlan12 10.20.7.0/24
    member:
      - 10.20.1.1
      - 10.20.1.2
  - id: 1101
    desc: edge0[1-2] v101
    member:
      - 10.20.1.1
      - 10.20.1.2
  - id: 1421
    desc: public services
    member:
      - 10.20.1.1
      - 10.20.1.2
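The rd scheme described in those comments (ip:rrvvv, with a realm of 1-64 and vlans of 1-999) can be sketched as a small helper. This is illustrative only; the helper name and the realm value are assumptions, not part of the actual pillar tooling:

```python
# Illustrative sketch of the type-1 route-distinguisher scheme from the
# pillar comments: "router-ip:value", where the value packs a realm (rr,
# 1-64) and a vlan (vvv, 1-999) as rr*1000 + vvv. Helper name is hypothetical.
def make_rd(router_ip: str, realm: int, vlan: int) -> str:
    assert 1 <= realm <= 64, "realm out of range"
    assert 1 <= vlan <= 999, "vlan out of range"
    return f"{router_ip}:{realm * 1000 + vlan}"

# realm 1, vlan 421 -> the rd 10.20.1.1:1421 seen in the BGP config
print(make_rd("10.20.1.1", 1, 421))  # → 10.20.1.1:1421
```

Packing realm and vlan into one value keeps each rd unique per host while staying inside the 2-byte value limit of a type-1 RD.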
  • 34. Auto Config: BGP, Interfaces, Links
a) a simple config is used to build ...
b) ... the complicated interface configuration in the diagram shown previously ...
c) ... with the resulting instances installed into the kernel

# excerpt from BGP configuration file
 address-family l2vpn evpn
  neighbor 10.20.3.2 activate
  vni 1101
   rd 10.20.1.1:1101
   route-target import 10.20.1.2:1101
   route-target export 10.20.1.1:1101
  exit-vni
  vni 1012
   rd 10.20.1.1:1012
   route-target import 10.20.1.2:1012
   route-target export 10.20.1.1:1012
  exit-vni
  vni 1421
   rd 10.20.1.1:1421
   route-target import 10.20.1.2:1421
   route-target export 10.20.1.1:1421
  exit-vni
  advertise-all-vni
 exit-address-family

# excerpt from /etc/network/interfaces:
# description: shared external containers
auto vlan421
iface vlan421
    pre-up brctl addbr brPub421
    pre-up brctl stp brPub421 off
    up ip link set dev brPub421 up
    pre-up ip link add vxPub421 type vxlan id 1421 dstport 4789 local 10.20.1.1 nolearning
    pre-up brctl addif brPub421 vxPub421
    up ip link set dev vxPub421 up
    pre-up ip link add vbPub421 type veth peer name voPub421
    pre-up brctl addif brPub421 vbPub421
    pre-up ovs-vsctl --may-exist add-port ovsbr0 voPub421 tag=421
    up ip link set dev vbPub421 up
    up ip link set dev voPub421 up
    down ip link set dev vbPub421 down
    down ip link set dev voPub421 down
    pre-up ovs-vsctl --may-exist add-port ovsbr0 vlan421 tag=421 -- set interface vlan421 type=internal
    post-down ovs-vsctl --if-exists del-port ovsbr0 vlan421

# ip link show dev brPub421
17: brPub421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 6e:56:4f:62:7c:82 brd ff:ff:ff:ff:ff:ff
# ip link show vxPub421
18: vxPub421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brPub421 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether ee:38:74:6c:99:3f brd ff:ff:ff:ff:ff:ff
# ip link show voPub421
19: voPub421@vbPub421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP mode DEFAULT group default qlen 1000
    link/ether 9a:e4:51:35:89:83 brd ff:ff:ff:ff:ff:ff
# ip link show vbPub421
20: vbPub421@voPub421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master brPub421 state UP mode DEFAULT group default qlen 1000
    link/ether 6e:56:4f:62:7c:82 brd ff:ff:ff:ff:ff:ff
# ip link show vlan421
21: vlan421: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 62:06:81:20:29:09 brd ff:ff:ff:ff:ff:ff
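To see how the simple vni pillar drives the per-VNI evpn stanza above, here is a hypothetical generator. The real build renders this through Salt/Jinja templates (not shown); the function and its behavior for multiple members are illustrative assumptions:

```python
# Hypothetical sketch: emit the per-VNI "address-family l2vpn evpn" lines
# shown above from the vni pillar list. local_ip is this host's router id;
# the other members become route-target imports. The actual deployment
# renders this with Salt templates, not this function.
def evpn_stanza(vnis: list, local_ip: str) -> str:
    lines = []
    for vni in vnis:
        peers = [m for m in vni["member"] if m != local_ip]
        lines.append(f" vni {vni['id']}")
        lines.append(f"  rd {local_ip}:{vni['id']}")
        for peer in peers:
            lines.append(f"  route-target import {peer}:{vni['id']}")
        lines.append(f"  route-target export {local_ip}:{vni['id']}")
        lines.append(" exit-vni")
    return "\n".join(lines)

# one entry from the vni pillar shown on slide 33
vnis = [{"id": 1421, "desc": "public services",
         "member": ["10.20.1.1", "10.20.1.2"]}]
print(evpn_stanza(vnis, "10.20.1.1"))
```

Run for 10.20.1.2 instead, the import/export route-targets swap, which is exactly the symmetry the pillar's member list encodes.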
  • 35. Process
● With Salt state, pillar, and reactor files defined for all services and configuration elements, only two commands are necessary to rebuild any one of the three cloud management boxes:
– destroy the boot sector
– reboot
  • 36. Process
● Upon reboot, the physical box obtains the pxeboot installation files, allocates and formats the file system, installs the operating system, installs the Salt agent, and automatically reboots
● Upon that reboot, the Salt agent contacts one of the remaining Salt Masters and automatically starts provisioning the system and services as defined in the Salt state and pillar files
● LXC containers are instantiated and started at this time
● The Salt agent in each container contacts the Salt Master to initiate the build of that specific container, using services supplied by the containers of surviving hosts
  • 37. Results
● Builds are:
– Fully Documented
– Reproducible
– Repeatable
– Automated
  • 40. Intro To Flows
● Packet: ethernet, ip, udp/tcp, content
● Ethernet: dst mac, src mac
● Protocol: ip/udp/tcp
● IP Five-tuple: proto, source ip/port, dest ip/port
● TCP Flags: syn, ack, ...
● Basis for switching, routing, firewalling
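The five-tuple above is the key that flow tables and connection tracking hang state on. A minimal sketch of aggregating packets into flows by five-tuple (the packet dicts and field names are illustrative, not parsed from real frames):

```python
# Minimal sketch: group packets into flows keyed by the IP five-tuple
# (proto, src ip/port, dst ip/port) described above. Packet records are
# illustrative stand-ins for parsed headers.
from collections import Counter

def five_tuple(pkt: dict) -> tuple:
    return (pkt["proto"], pkt["src_ip"], pkt["src_port"],
            pkt["dst_ip"], pkt["dst_port"])

packets = [
    {"proto": "tcp", "src_ip": "10.20.7.5", "src_port": 40001,
     "dst_ip": "10.20.7.10", "dst_port": 443},
    {"proto": "tcp", "src_ip": "10.20.7.5", "src_port": 40001,
     "dst_ip": "10.20.7.10", "dst_port": 443},
    {"proto": "udp", "src_ip": "10.20.7.5", "src_port": 5353,
     "dst_ip": "10.20.7.255", "dst_port": 5353},
]

flows = Counter(five_tuple(p) for p in packets)
print(len(flows))  # → 2 distinct flows
```

A switch, router, or firewall does essentially this lookup per packet: hit an existing flow entry and apply its cached action, or miss and consult the slower rule/route pipeline.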
  • 41. Interesting Network Tools
● OpenFlow
● eBPF
● XDP