5. Docker
● Open-source platform for developing, shipping, and running applications using container virtualization technology
● De-facto standard container technology
● Containers share the same OS kernel
● Avoid replicating (virtualizing) guest OS, RAM, CPUs, ...
● Containers are isolated from each other, but can share resources
○ File system volumes
○ Networks
○ …
8. Deploying Janus
● Bare metal
● Virtual Machines
● Docker containers
● Cloud instances
● A mix of the above
9. Container deployment strategies
● Most WebRTC failures are network-related
● Different networking modes are available for containers
○ Host
○ NAT
○ Dedicated IP
● Choosing the most appropriate one is the main challenge
● Spoiler alert: dedicated IP addresses for the win!
10. Docker networking
● The Container Networking Model (CNM) specifies the networking architecture for container technology
○ Sandboxes
○ Endpoints
○ Networks
● Libnetwork
○ Docker’s native implementation of the CNM
○ Leverages the Linux kernel implementation of the network stack
○ 4 built-in network drivers: host, bridge, overlay, macvlan (see the listing below)
● Docker networking can be tricky!
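As a quick, hedged illustration (not taken from the slides), the networks Docker creates by default, and the driver behind each, can be listed on any host; the IDs below are placeholders:

$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
xxxxxxxxxxxx   bridge    bridge    local
xxxxxxxxxxxx   host      host      local
xxxxxxxxxxxx   none      null      local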
11. Network drivers: host
● Containers use the network stack of the host machine
○ No dedicated network namespace
○ All host ifaces can be directly used by the container
● Easiest networking mode
● Network port conflicts need to be avoided
● Limits the number of containers running on the same host
● Auto-scaling is difficult
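A minimal sketch of host networking (the image name is an assumption, not from the slides): the container reuses the host's interfaces and ports directly, so no port publishing is involved, but only one such container can bind a given port:

$ docker run -d --name janus --network host meetecho/janus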
12. Network drivers: bridge
● Docker’s default network mode
● Implements NAT functionality
● Containers on the same bridge network communicate over LAN
● Containers on different bridge networks need routing
● Port mapping needed for reachability from the outside
○ Conflicts need to be avoided
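A minimal sketch of bridge mode with port mapping (image name and port choices are assumptions): the HTTP API port and a UDP range for media are published on the host, and each published port must be unique across containers:

$ docker run -d --name janus \
    -p 8088:8088 \
    -p 20000-20050:20000-20050/udp \
    meetecho/janus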
13. Docker NAT functionality (1/2)
● Docker’s NAT behavior appears to be address independent (at first glance)
○ Port Restricted Cone NAT
○ Check out the recently enhanced test_stun feature in Janus
● In a dev environment, using the bridge driver is quite a common choice
● ICE setup expected to succeed thanks to peer reflexive candidates
● ICE randomly failed :(
○ The Streaming plugin was mostly affected by such failures
○ EchoTest plugin not affected
○ VideoRoom plugin only affected for subscribers
14. Docker NAT functionality (2/2)
● Turned out to depend on which party sends the JSEP offer
○ Browser offers, Janus answers → ICE succeeds
○ Janus offers, browser answers → ICE fails
● Tracked down this behavior to libnetfilter, upon which Docker’s libnetwork is based
● The Docker NAT is not address independent!
○ It sometimes acts like a symmetric NAT
21. Takeaways
● Docker networking can be tricky when dealing with ICE
● Host networking limits the number of containers running on the same host
● Port mapping is not ideal when you want to scale a service up/down as needed
● NATed networks should be fine in a controlled environment, but…
● … things get weird when the browser is also behind a NAT
○ Firefox multiprocess has a built-in UDP packet filter
● The new obfuscation of host candidates through mDNS makes things even worse!
○ Chrome and Safari already there, Firefox coming soon
● Dedicated IP addresses to containers for the win!
○ Macvlan
○ Pipework
22. Macvlan
● Docker built-in network driver
● Allows a single (host) physical iface to expose multiple MAC and IP addresses that can be assigned to containers (see the example below)
● No need for port publishing
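A hedged sketch of a macvlan setup (subnet, gateway, parent interface, addresses, network and image names are all placeholders): the container gets its own MAC and IP address on the physical network, so no port publishing is needed:

$ docker network create -d macvlan \
    --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
    -o parent=eth0 janusnet
$ docker run -d --name janus --network janusnet --ip 192.168.1.100 meetecho/janus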
23. Pipework
● Tool for connecting together containers in arbitrarily complex scenarios
● https://github.com/jpetazzo/pipework
● Allows creating a new network interface inside a container and setting its networking parameters (IP address, netmask, gateway, ...)
○ This new interface becomes the default one for the container
$ pipework <hostinterface> [-i containerinterface] <guest> <ipaddr>/<subnet>[@default_gateway] [macaddr][@vlan]
$ pipework <hostinterface> [-i containerinterface] <guest> dhcp [macaddr][@vlan]
● If you want to use both IPv4 and IPv6, the IPv6 interface has to be created first
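A concrete (hypothetical) invocation matching the syntax above, with placeholder interface names, container name and addresses from the documentation range:

$ pipework eth0 -i eth1 janus-container 203.0.113.10/24@203.0.113.1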
24. Example: IETF Remote Participation
● The whole IETF Remote Participation Service is based upon Docker
● The NOC team deploys bare metal servers at meeting venues
● Four VMs running on different servers are dedicated to the remote participation service
● VMs host a bunch of Docker containers
○ Janus
○ Asterisk
○ Tomcat
○ Redis + Node.js
○ Nginx
○ Together these make up 1 instance of the Meetecho RPS (the containers share the network stack and have public IPv4 and IPv6 addresses)
● Eight instances of the Meetecho RPS (one per room)
○ Split on two different VMs
○ A third VM is left idle for failover → containers migration if needed
● Other containers (stats, auth service, TURN, …) running on the fourth VM
26. Janus recording functionality
● Janus records individual contributions into MJR files
● MJRs can be converted into Opus/Wave/WebM/MP4 playable files via the janus-pp-rec tool shipped with Janus (example below)
● Individual contributions can be merged together into a single audio/video file
○ Timing information needs to be taken into account to properly sync media
○ Other info might be needed as well, e.g., time of the first keyframe written into the MJR
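For example (file names and paths are placeholders), janus-pp-rec takes the source MJR and a destination file, deriving the output container from the extension:

$ janus-pp-rec /recordings/videoroom-1234-user-audio.mjr audio.opus
$ janus-pp-rec /recordings/videoroom-1234-user-video.mjr video.webm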
27. Meetecho Melter
● A solution for converting MJR files into videos according to a given layout
● Leverages the MLT Multimedia Framework
○ https://www.mltframework.org/
● Post-processing and encoding happen on a cluster of machines hosting Docker containers
○ Initially implemented with CoreOS
○ Moved to Docker native Swarm mode
28. Docker Swarm
● Cluster management and orchestration embedded in Docker engine
● Docker engine = swarm node
○ Manager(s)
■ Maintain cluster state through Raft consensus
■ Schedule services
■ Serve the swarm HTTP API
○ Worker(s)
■ Run containers scheduled by managers
● Fault tolerance
○ Containers are re-scheduled if a node fails
○ The cluster can tolerate up to (N-1)/2 managers failing
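A minimal sketch of bootstrapping a swarm (addresses and tokens are placeholders): initialize the first manager, then join workers using the token printed by the init command:

$ docker swarm init --advertise-addr 192.0.2.10               # on the first manager
$ docker swarm join --token <worker-token> 192.0.2.10:2377    # on each worker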
29. Challenges
● Leverage a number of bare metal servers as swarm nodes
● Set the maximum number of containers per node according to nodes’ specs
● Schedule containers according to the above limits
● Solution: exploit Docker networks and the swarm scheduler in a “hacky” way
30. Swarm-scoped Macvlan network
● On each swarm node create a network configuration
○ The network will have a limited number of IP addresses available (via subnetting)
○ The --aux-address option excludes an IP address from the usable ones
○ Must define non-overlapping ranges of addresses among all nodes
● On the Swarm manager, create a swarm-scoped network from the defined config
$ docker network create --config-only --subnet 192.168.100.0/24 \
    --ip-range 192.168.100.0/29 --gateway 192.168.100.254 \
    --aux-address "a=192.168.100.1" --aux-address "b=192.168.100.2" \
    meltnet-config
$ docker network create --config-from meltnet-config --scope swarm \
    -d macvlan meltnet
31. Swarm-scoped Macvlan network
● The manager spawns containers on the swarm from a docker stack descriptor (a sketch follows below)
● Each container is plumbed into the meltnet network
● If a node runs out of IP addresses, new containers will not be allocated there until one becomes available again
● Containers also leverage the NFS volume driver to read/write to a shared Network Attached Storage
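As a hedged sketch of such a stack descriptor (file name, service name, image, replica count, NAS address and export path are all assumptions, not from the slides), the service attaches to the external meltnet network and mounts an NFS-backed volume:

# melter-stack.yml (hypothetical)
version: "3.7"
services:
  melter:
    image: meetecho/melter          # hypothetical image name
    networks:
      - meltnet
    volumes:
      - recordings:/recordings
    deploy:
      replicas: 4                   # arbitrary example value
networks:
  meltnet:
    external: true                  # the swarm-scoped macvlan created earlier
volumes:
  recordings:
    driver: local                   # local driver with NFS options
    driver_opts:
      type: nfs
      o: "addr=192.0.2.50,rw"       # placeholder NAS address
      device: ":/export/recordings"

$ docker stack deploy -c melter-stack.yml melter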