2. OPENSHIFT CONTAINER PLATFORM | Technical Value
Self-Service
Multi-language
Automation
Collaboration
Multi-tenant
Standards-based
Web-scale
Open Source
Enterprise Grade
Secure
3. OpenShift 4 — Everything you need
Everything you need, out of the box
1. Fully integrated and automated architecture
2. Seamless Kubernetes deployment on any cloud or on-premises environment
3. Fully automated installation, from cloud infrastructure to OS to application services
4. One-click platform and application updates
5. Auto-scaling of cloud resources
[Diagram: automated operations, cluster services (monitoring, showback, registry, logging), application services (middleware, functions, ISV, service mesh), and developer services (dev tools, automated builds, CI/CD, IDE) layered on Enterprise Linux CoreOS, running on any infrastructure: physical, virtual, private, or public cloud. CaaS | PaaS | FaaS; best IT ops experience, best developer experience; certified.]
4. Value of OpenShift
OPENSHIFT CONTAINER PLATFORM | Functional Overview
[Diagram: the platform stack, from Red Hat Enterprise Linux / RHEL CoreOS up through Kubernetes and Automated Operations, topped by Cluster Services (monitoring, logging, registry, router, telemetry), Developer Services (dev tools, CI/CD, automated builds, IDE), and Application Services (service mesh, serverless, middleware/runtimes, ISVs). CaaS | PaaS | FaaS; best IT ops experience, best developer experience.]
7. VIRTUAL MACHINES AND CONTAINERS
Virtual machines isolate the hardware; containers isolate the process.
[Diagram: each VM stacks its app, OS dependencies, and kernel on a hypervisor over the hardware; containers share the container host's kernel, each packaging only its app and OS dependencies.]
11. OpenShift Concepts
An image repository contains all versions of an image in the image registry.
[Diagram: an image registry with two repositories, myregistry/frontend (tags latest, 2.0, 1.1, 1.0) and myregistry/mongo (tags latest, 3.7, 3.6, 3.4), each tag pointing to an image.]
12. OpenShift Concepts
Containers are wrapped in pods, which are the units of deployment and management.
[Diagram: a pod with a single container (IP 10.140.4.44) and a pod with two containers sharing one IP (10.15.6.55).]
13. OpenShift Concepts
ReplicationControllers and ReplicaSets ensure a specified number of pods are running at any given time.
[Diagram: a ReplicationController/ReplicaSet definition (image name, replicas, labels, cpu, memory, storage) driving identical pods 1 through N.]
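As a sketch, a minimal ReplicaSet manifest carrying the attributes named on the slide (image name, replicas, labels, cpu/memory) might look like the following; the names and values are illustrative, not from the deck:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3                          # desired number of pods
  selector:
    matchLabels:
      app: frontend                    # pods are matched by label
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: myregistry/frontend:1.0 # image name
        resources:
          requests:
            cpu: 100m                  # cpu / memory requests
            memory: 128Mi
```

In practice you rarely create ReplicaSets directly; a Deployment (or an OpenShift DeploymentConfig) manages them for you during rollouts.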
15. OpenShift Concepts
A DaemonSet ensures that all (or some) nodes run a copy of a pod.
[Diagram: a DaemonSet definition with node selector foo=bar places a pod on each node labeled foo=bar; a node labeled foo=baz gets none.]
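A minimal DaemonSet sketch matching the slide's foo=bar node selector (the agent name and image are illustrative placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      nodeSelector:
        foo: bar                       # only nodes labeled foo=bar run a copy
      containers:
      - name: node-agent
        image: myregistry/node-agent:latest
```

Omit the nodeSelector to run a copy on every schedulable node.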
16. OpenShift Concepts
ConfigMaps allow you to decouple configuration artifacts from image content.
[Diagram: in Dev, a ConfigMap supplies appconfig.conf with MYCONFIG=true to the pod; in Prod, the same file carries MYCONFIG=false.]
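The Dev-side ConfigMap from the diagram could be declared as follows (a minimal sketch using the slide's file name and value):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: appconfig
data:
  appconfig.conf: |
    MYCONFIG=true
```

Pods consume it as a mounted volume or as environment variables; the Prod project carries its own ConfigMap with MYCONFIG=false while the image stays identical.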
17. OpenShift Concepts
Secrets provide a mechanism to hold sensitive information such as passwords.
[Diagram: in Dev, a Secret supplies hash.pw (ZGV2Cg==) to the pod; in Prod, a different Secret supplies hash.pw (cHJvZAo=).]
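The Dev-side Secret from the diagram, as a minimal sketch (the object name is illustrative; the data value is the slide's base64-encoded string):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hash-pw
type: Opaque
data:
  hash.pw: ZGV2Cg==    # values under data: are base64-encoded
```

Like ConfigMaps, Secrets are consumed as volumes or environment variables, but they are intended for sensitive values and are handled more restrictively by the platform.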
18. OPENSHIFT & KUBERNETES CONCEPTS
Services provide internal load-balancing and service discovery across pods.
[Diagram: the "backend" service selects pods labeled role=backend (10.110.1.11, 10.120.2.22, 10.130.3.33); a pod labeled role=frontend (10.140.4.44) is not part of the service.]
19. OPENSHIFT & KUBERNETES CONCEPTS
Apps can talk to each other via services.
[Diagram: the frontend pod (10.140.4.44) reaches the backend pods through the "backend" service rather than their individual pod IPs.]
20. OpenShift Concepts
Routes make services accessible to clients outside the environment via real-world URLs.
> curl http://app-prod.mycompany.com
[Diagram: the route app-prod.mycompany.com forwards to the "frontend" service, which balances across pods labeled role=frontend.]
21. OpenShift Concepts
Projects isolate apps across environments, teams, groups, and departments.
[Diagram: four projects, PAYMENT DEV, PAYMENT PROD, CATALOG, and INVENTORY, each containing its own pods.]
39. Installation Paradigms
OPENSHIFT CONTAINER PLATFORM | Installation
OPENSHIFT CONTAINER PLATFORM
Full Stack Automated
Simplified, opinionated "best practices" cluster provisioning. Fully automated installation and updates, including the host container OS.
Pre-existing Infrastructure
Customer-managed resources & infrastructure provisioning. Plug into existing DNS and security boundaries.
HOSTED OPENSHIFT
Azure Red Hat OpenShift
Deploy directly from the Azure console. Jointly managed by Red Hat and Microsoft Azure engineers.
OpenShift Dedicated
Get a powerful cluster, fully managed by Red Hat engineers and support.
41. Pre-existing Infrastructure Installation
OPENSHIFT CONTAINER PLATFORM | Installation
[Diagram: openshift-install generates the cluster assets; the customer deploys the cloud resources, RHEL CoreOS hosts, control plane, and worker nodes; once up, OCP cluster resources are operator managed. Worker nodes may run RHEL CoreOS or RHEL 7.]
Note: Control plane nodes must run RHEL CoreOS!
42. Comparison of Paradigms
OPENSHIFT CONTAINER PLATFORM | Installation

Task                              Full Stack Automation     Pre-existing Infrastructure
Build Network                     Installer                 User
Setup Load Balancers              Installer                 User
Configure DNS                     Installer                 User
Hardware/VM Provisioning          Installer                 User
OS Installation                   Installer                 User
Generate Ignition Configs         Installer                 Installer
OS Support                        Installer: RHEL CoreOS    User: RHEL CoreOS + RHEL 7
Node Provisioning / Autoscaling   Yes                       Only for providers with OpenShift Machine API support
43. 4.2 Supported Providers
OPENSHIFT PLATFORM
Generally Available | Product Manager: Katherine Dubé
[Table: providers supported for Full Stack Automation (IPI) and for Pre-existing Infrastructure (UPI), including Bare Metal (UPI only).]
* Support for full stack automated installs to pre-existing VPC & subnets and deploying as private/internal clusters is planned for 4.3.
44. Full stack automated deployments of AWS, Azure, GCP & OSP!
OPENSHIFT PLATFORM
Generally Available | Product Manager: Katherine Dubé
$ ./openshift-install --dir ./demo create cluster
? SSH Public Key /Users/demo/.ssh/id_rsa.pub
? Platform azure
? azure subscription id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
? azure tenant id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
? azure service principal client id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
? azure service principal client secret *********************************
INFO Saving user credentials to "/Users/demo/.azure/osServicePrincipal.json"
? Region centralus
? Base Domain example.com
? Cluster Name demo
? Pull Secret [? for help] *************************************************************
INFO Creating infrastructure resources…
INFO Waiting up to 30m0s for the Kubernetes API at https://api.demo.example.com:6443...
INFO API v1.14.0+4788f50 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 30m0s for the cluster at https://api.demo.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/demo/openshift-install/demo/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo.example.com
INFO Login to the console with user: kubeadmin, password: <password>
$ ./openshift-install --dir ./demo create cluster
? SSH Public Key /Users/demo/.ssh/id_rsa.pub
? Platform gcp
? Service Account (absolute path to file or JSON content)
/Users/demo/.secrets/ServiceAccount.json
INFO Saving the credentials to "/Users/demo/.gcp/osServiceAccount.json"
? Project ID openshift-gce-devel
? Region centralus
? Base Domain example.com
? Cluster Name demo
? Pull Secret [? for help] *************************************************************
INFO Creating infrastructure resources…
INFO Waiting up to 30m0s for the Kubernetes API at https://api.demo.example.com:6443...
INFO API v1.14.0+4788f50 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 30m0s for the cluster at https://api.demo.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/demo/openshift-install/demo/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo.example.com
INFO Login to the console with user: kubeadmin, password: <password>
Simplified Cluster Creation
Easily provision a "best practices" OpenShift cluster
● CLI-based installer with interactive guided workflow
● Installer takes care of provisioning the underlying infrastructure, significantly reducing deployment complexity
Faster Install
The installer typically finishes within 30 minutes
● Only minimal user input needed, with all non-essential install config options now handled by component operator CRDs
● Leverages RHEL CoreOS for all node types, enabling full stack automation of installation and updates of both platform and host OS content
45. Deploy to pre-existing infrastructure for AWS, Bare Metal, GCP, & VMware!
OPENSHIFT PLATFORM
Generally Available | Product Manager: Katherine Dubé
Customized OpenShift Deployments
Enables OpenShift to be deployed to user-managed resources and pre-existing infrastructure.
● Customers are responsible for provisioning all infrastructure objects, including networks, load balancers, DNS, and hardware/VMs, and for performing host OS installation
● Deployments can be performed both on-premise and in the public cloud
● The OpenShift installer handles generating cluster assets (such as node ignition configs and kubeconfig) and aids with cluster bring-up by monitoring for bootstrap-complete and cluster-ready events
● Example native provider templates (AWS CloudFormation and Google Deployment Manager) are included to help with user provisioning tasks for creating infrastructure objects
● While RHEL CoreOS is mandatory for the control plane, either RHEL CoreOS or RHEL 7 can be used for the worker/infra nodes
$ cat ./demo/install-config.yaml
apiVersion: v1
baseDomain: example.com
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
...
$ ./openshift-install --dir ./demo create ignition-config
INFO Consuming "Install Config" from target directory
$ ./openshift-install --dir ./demo wait-for bootstrap-complete
INFO Waiting up to 30m0s for the Kubernetes API at https://api.demo.example.com:6443...
INFO API v1.11.0+c69f926354 up
INFO Waiting up to 30m0s for the bootstrap-complete event...
$ ./openshift-install --dir ./demo wait-for cluster-ready
INFO Waiting up to 30m0s for the cluster at https://api.demo.example.com:6443 to initialize...
INFO Install complete!
46. OPENSHIFT PLATFORM
Disconnected "Air-gapped" Installation & Upgrading
Generally Available | Product Manager: Katherine Dubé
Installation Procedure
● Mirror OpenShift content to a local container registry in the disconnected environment
● Generate install-config.yaml: $ ./openshift-install create install-config --dir <dir>
  ○ Edit and add the pull secret (pullSecret), CA certificate (additionalTrustBundle), and image content sources (imageContentSources) to install-config.yaml
● Set the OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE environment variable during the creation of the ignition configs
● Generate the ignition configs: $ ./openshift-install create ignition-configs --dir <dir>
● Use the resulting ignition files to bootstrap the cluster deployment
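The install-config.yaml additions mentioned above look roughly like this; the registry hostname, repository paths, and certificate body are placeholders, not values from the deck:

```yaml
apiVersion: v1
baseDomain: example.com
# ... other install-config fields ...
pullSecret: '{"auths":{"local.registry.example.com:5000":{"auth":"<base64 creds>"}}}'
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <CA certificate of the local mirror registry>
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - local.registry.example.com:5000/ocp4/openshift4   # local mirror
  source: quay.io/openshift-release-dev/ocp-release   # original source
```

With this in place, nodes pull release content from the local mirror instead of reaching out to quay.io.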
Overview
● 4.2 introduces support for installing and updating OpenShift clusters in disconnected environments
● Requires a local Docker 2.2 spec compliant container registry to host OpenShift content
● Designed to work with the user-provisioned infrastructure deployment method
  ○ Note: will not work with installer-provisioned infrastructure deployments

# mirror update image:
$ oc adm -a <secret_json> release mirror
    --from=quay.io/<repo>/<release:version>
    --to=<local registry>/<repo>
    --to-release-image=<local registry>/<repo:version>
# provide cluster with update image to update to:
$ oc adm upgrade --to-mirror=<local repo:version>

[Diagram: an admin mirrors the Red Hat-sourced update image from the Quay.io container registry to a local container registry; the disconnected customer cluster then updates from the local copy.]
49. OPENSHIFT PLATFORM
Generally Available | Product Manager: Ben Breard
Red Hat Enterprise Linux: general purpose OS
BENEFITS
• 10+ year enterprise life cycle
• Industry standard security
• High performance on any infrastructure
• Customizable and compatible with wide ecosystem of partner solutions
WHEN TO USE: when customization and integration with additional solutions is required
Red Hat Enterprise Linux CoreOS: immutable container host
BENEFITS
• Self-managing, over-the-air updates
• Immutable and tightly integrated with OpenShift
• Host isolation is enforced via containers
• Optimized performance on popular infrastructure
WHEN TO USE: when cloud-native, hands-free operations are a top priority
50. Immutable Operating System
OPENSHIFT PLATFORM
Red Hat Enterprise Linux CoreOS is versioned with OpenShift
CoreOS is tested and shipped in conjunction with the platform. Red Hat runs thousands of tests against these configurations.
Red Hat Enterprise Linux CoreOS is managed by the cluster
The operating system is operated as part of the cluster, with the config for components managed by the Machine Config Operator:
● CRI-O config
● Kubelet config
● Authorized registries
● SSH config
RHEL CoreOS admins are responsible for: nothing.
51. Transactional Updates via rpm-ostree
OPENSHIFT PLATFORM
Transactional updates ensure that Red Hat CoreOS is never altered during runtime. Rather, it is booted directly into an always "known good" version.
● Each OS update is versioned and tested as a complete image
● OS binaries (/usr) are read-only
● Updates are encapsulated in container images
● File system and package layering are available for hotfixes and debugging
52. Machine Config Operator (MCO)
OPENSHIFT PLATFORM
Generally Available | Product Manager: Ben Breard
Provides cluster-level configuration, enables rolling upgrades, and prevents drift between new and existing nodes. The MCO is the heart of what makes RHCOS a kube-native operating system.
Configure kernel arguments for the cluster
● oc create -f 50-kargs.yaml
● oc edit mc/50-kargs
MCO can be paused to suspend operations
Provides control for when changes can be deployed
Custom MachinePools can have inheritance
Enables MachineConfigs to scale
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 50-kargs
spec:
  kernelArguments:
    - audit=1
    - audit_backlog_limit=8192
    - net.ifnames.prefix=net
53. OpenShift Architecture
CRI-O: a lightweight, OCI-compliant container runtime
● Minimal and secure architecture
● Optimized for Kubernetes
● Runs any OCI-compliant image (including Docker images)
54. BROAD ECOSYSTEM OF WORKLOADS
CRI-O Support in OpenShift
CRI-O 1.12   Kubernetes 1.12   OpenShift 4.0
CRI-O 1.13   Kubernetes 1.13   OpenShift 4.1
CRI-O 1.14   Kubernetes 1.14   OpenShift 4.2
CRI-O tracks and versions identically with Kubernetes, simplifying support permutations
56. OpenShift Bootstrap Process: Self-Managed Kubernetes
OpenShift Installation
How to boot a self-managed cluster:
● OpenShift 4 is unique in that management extends all the way down to the operating system
● Every machine boots with a configuration that references resources hosted in the cluster it joins, enabling the cluster to manage itself
● The downside is that every machine looking to join the cluster is waiting on the cluster to be created
● The dependency loop is broken using a bootstrap machine, which acts as a temporary control plane whose sole purpose is bringing up the permanent control plane nodes
● Permanent control plane nodes boot and join the cluster by leveraging the control plane on the bootstrap machine
● Once the pivot to the permanent control plane takes place, the remaining worker nodes can be booted and join the cluster
Bootstrapping process step by step:
1. Bootstrap machine boots and starts hosting the remote resources required for master machines to boot.
2. Master machines fetch the remote resources from the bootstrap machine and finish booting.
3. Master machines use the bootstrap node to form an etcd cluster.
4. Bootstrap node starts a temporary Kubernetes control plane using the newly-created etcd cluster.
5. Temporary control plane schedules the production control plane to the master machines.
6. Temporary control plane shuts down, yielding to the production control plane.
7. Bootstrap node injects OpenShift-specific components into the newly formed control plane.
8. The installer then tears down the bootstrap node; with user-provisioned infrastructure, this is performed by the administrator.
61. Rolling Machine Updates
CLOUD-LIKE SIMPLICITY, EVERYWHERE
Generally Available | Product Manager: Ben Breard
Single-click updates
● RHEL CoreOS version & config
● Kubernetes core components
● OpenShift cluster components
Configure how many machines can be unavailable
Set the "maxUnavailable" setting in the MachineConfigPool to maintain high availability while rolling out updates. The default is 1.
Machine Config Operator (MCO) controls updates
The MCO runs a daemon on every node in the cluster. When you upgrade with oc adm upgrade, the MCO executes these changes.
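The maxUnavailable setting mentioned above lives on the MachineConfigPool object. For example, to let two workers update at once (a sketch; the value is illustrative):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker
spec:
  maxUnavailable: 2   # default is 1
```

In practice you would patch the existing pool rather than create one, e.g. oc patch machineconfigpool/worker --type merge -p '{"spec":{"maxUnavailable":2}}'.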
62. CLOUD-LIKE SIMPLICITY, EVERYWHERE
Generally Available | Product Manager: Duncan Hardie
Cloud API
● Provide a single view and control across multiple cluster types
● Machine API:
  ○ Set up definitions via CRDs
  ○ Machine: a node
  ○ MachineSet: think ReplicaSet
  ○ Actuators roll definitions across clusters
  ○ Nodes are drained before deletion
● Cluster Autoscaler: provide/remove additional nodes on demand
● AWS (4.1), Azure/GCP (target 4.2), VMware (future)
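As a rough sketch of the MachineSet shape ("think ReplicaSet" for machines); all names are illustrative and the provider-specific fields are omitted:

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: demo-worker-a
  namespace: openshift-machine-api
spec:
  replicas: 2                     # like a ReplicaSet, but for machines
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: demo-worker-a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: demo-worker-a
    spec:
      providerSpec:
        value: {}                 # cloud-specific fields (instance type, subnet, image, ...)
```

Scaling the set up or down (oc scale machineset demo-worker-a --replicas=3 -n openshift-machine-api) causes the actuator to create or drain-and-delete machines accordingly.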
64. The Kubernetes Networking Model
OPENSHIFT SDN
Container addressability
All containers get a unique cluster-wide IP address.
Topological simplicity
The Kubernetes cluster network is flat. All pods can address each other and Kubernetes services directly without NAT.
Integration
Agents running on a Kubernetes host can address pods with their logical IP address. Container ports can be mapped directly to host ports.
Sources:
https://kubernetes.io/docs/concepts/cluster-administration/networking/
https://github.com/containernetworking/cni/blob/master/SPEC.md
https://github.com/containernetworking/cni
65. OpenShift SDN: Simple View
OPENSHIFT SDN
[Diagram: two nodes (172.16.1.10, 172.16.1.20) connected by the physical network; their pods (10.1.2.2, 10.1.2.4 and 10.1.4.2, 10.1.4.4) communicate over the overlay network.]
67. How can we get traffic into an OpenShift cluster?
GETTING TRAFFIC INTO THE CLUSTER
Route / Ingress
● Standard method
● Traffic enters through the OpenShift "Router"
● Supports web traffic
Node Port
● Useful for non-web protocols
● Exposes a port on every cluster host
External IP
● Uses a static IP address assigned to cluster hosts
● Traffic bound to that IP is proxied to the workload
● Must manually track IP addresses
68. GETTING TRAFFIC INTO THE CLUSTER
[Diagram: external traffic enters the "Router" / Ingress Controller, which looks up the service's endpoints and proxies connections directly to the pods.]
69. Node Port
GETTING TRAFFIC INTO THE CLUSTER
NodePort binds a service to a unique port on all the nodes
Traffic received on any node redirects to a node with the running service
Node ports come from a dedicated range (30000-32767 by default), which usually differs from the service port
Firewall rules must allow traffic to all nodes on the specific port
[Diagram: a client connects to 192.10.0.10:31421, 192.10.0.11:31421, or 192.10.0.12:31421; any node forwards to the service (internal IP 172.1.0.20:90), which balances across pods 10.1.0.1:90, 10.1.0.2:90, and 10.1.0.3:90.]
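A NodePort service matching the diagram's numbers could be sketched as follows (the service name and pod label are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: myapp                # illustrative pod label
  ports:
  - port: 90                  # service port (172.1.0.20:90 in the diagram)
    targetPort: 90            # container port on the pods
    nodePort: 31421           # opened on every node
    protocol: TCP
```

If nodePort is omitted, Kubernetes picks a free port from the node-port range automatically.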
70. External IP
GETTING TRAFFIC INTO THE CLUSTER
An IP address is associated with the service and assigned to an underlying cluster host
Incoming traffic bound for that IP is proxied to the service
The service in turn proxies that traffic to its backing pods
The major drawback is manual bookkeeping of IP addresses
[Diagram: a client connects to external IP 192.10.0.11:8443; the node proxies to the service, which balances across pods 10.1.0.1:90 and 10.1.0.2:90.]
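The external-IP variant of a service, sketched with the diagram's addresses (name and label are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: myapp                # illustrative pod label
  ports:
  - port: 8443                # port clients connect to on the external IP
    targetPort: 90            # container port on the pods
  externalIPs:
  - 192.10.0.11               # must be routed to a cluster host; tracked manually
```

Unlike NodePort, the chosen IP must already reach a node; Kubernetes does not manage the address itself, which is the bookkeeping burden the slide mentions.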
71. Services Select Pods by Label
GETTING TRAFFIC INTO THE CLUSTER
[Diagram: service payroll-frontend (IP 172.10.1.23, port 8080) selects pods labeled app=payroll, role=frontend, version=1.0; a pod labeled app=payroll, role=backend is not selected.]
72. Services Select Pods by Label
GETTING TRAFFIC INTO THE CLUSTER
[Diagram: the same service now also matches a new pod labeled app=payroll, role=frontend, version=2.0 alongside the version=1.0 pods; membership is determined by the selector labels, not the version.]
73. OpenShift Route vs Kubernetes Ingress
GETTING TRAFFIC INTO THE CLUSTER

Feature                                        Ingress on OpenShift   Route on OpenShift
Standard Kubernetes object                     X
External access to services                    X                      X
Persistent (sticky) sessions                   X                      X
Load-balancing strategies                      X                      X
Rate-limit and throttling                      X                      X
IP whitelisting                                X                      X
TLS edge termination for improved security     X                      X
TLS re-encryption for improved security                               X
TLS passthrough for improved security                                 X
Multiple weighted backends (split traffic)                            X
Generated pattern-based hostnames                                     X
Wildcard domains                                                      X

Source: https://blog.openshift.com/kubernetes-ingress-vs-openshift-route/
74. Routes can split traffic
GETTING TRAFFIC INTO THE CLUSTER
Useful for A/B testing, blue/green, and canary deployments
[Diagram: a route sends 90% of traffic to Service A (pods running App A) and 10% to Service B (pods running App B).]
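The 90/10 split in the diagram is expressed with route backend weights; a sketch with illustrative names:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: split-route
spec:
  host: app.example.com
  to:
    kind: Service
    name: service-a
    weight: 90                # ~90% of traffic
  alternateBackends:
  - kind: Service
    name: service-b
    weight: 10                # ~10% of traffic
```

Weights are relative, so shifting a canary forward is just editing the numbers (e.g. via oc set route-backends).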
75. Route YAML Object
GETTING TRAFFIC INTO THE CLUSTER
The www.example.com DNS name must resolve to the router; the router then directs traffic to the pods backing the service named service-name.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: host-route
spec:
  host: www.example.com
  to:
    kind: Service
    name: service-name
76. Service YAML Object
GETTING TRAFFIC INTO THE CLUSTER
Selects pods based on label
Serves as a single IP and DNS name for a group of pods
Serves as a simple load balancer
apiVersion: v1
kind: Service
metadata:
  name: docker-registry
spec:
  selector:
    docker-registry: default
  clusterIP: 172.30.136.123
  ports:
  - port: 5000
    protocol: TCP
    targetPort: 5000
78. By default, pod traffic gets NAT'ed to the host IP
GETTING TRAFFIC OUT OF THE CLUSTER
[Diagram: pods in PROJECT A and PROJECT B on NODE 1 (IP 1) reach an external service that whitelists IP 1; their traffic leaves with the node's IP, so the whitelist admits every pod on that node, while pods on NODE 2 (IP 2) are blocked.]
80. OpenShift Network Plugins
GETTING TRAFFIC AROUND THE CLUSTER
Subnet: all pods can communicate with all other pods
Multitenant: project-level isolation
Network Policy (default): granular, policy-based isolation
[Diagram: multitenant isolation, where pods in PROJECT A and PROJECT B are isolated from each other while the DEFAULT namespace remains reachable; PROJECT C is likewise isolated.]
81. Network Policy
GETTING TRAFFIC AROUND THE CLUSTER
Example policies
● Allow all traffic inside the project
● Allow traffic from green to gray
● Allow traffic to purple on 8080
[Diagram: pods in PROJECT A and PROJECT B; permitted flows are marked, e.g. traffic to the purple pod on port 8080 is allowed while port 5432 is not.]
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-to-purple-on-8080
spec:
  podSelector:
    matchLabels:
      color: purple
  ingress:
  - ports:
    - protocol: TCP
      port: 8080
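The first example policy from the list, "allow all traffic inside the project", can be expressed with empty pod selectors (a common sketch; the name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}             # applies to every pod in the project
  ingress:
  - from:
    - podSelector: {}         # ...from any pod in the same project
```

Because NetworkPolicies are additive, this is typically combined with more specific policies like the allow-to-purple-on-8080 example above.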
83. OpenShift SDN: Less Simple View
OPENSHIFT SDN REPRISE
[Diagram: on each node (172.16.1.10, 172.16.1.20), pod eth0 interfaces in their network namespaces (netns) connect through veth pairs to the br0 Open vSwitch bridge; the vxlan interface carries pod-to-pod traffic between nodes over the physical network, while tun0 and iptables handle traffic leaving the cluster via the node's eth0.]
84. Container processes isolated by kernel namespacing
OPENSHIFT SDN REPRISE
[Diagram: pods (10.1.4.2, 10.1.4.4) on each node, each running in its own network namespace (netns).]
85. Network traffic exits namespaces through veth pairs
OPENSHIFT SDN REPRISE
[Diagram: each pod's eth0 inside its netns is one end of a veth pair whose other end sits on the host.]
86. Open vSwitch bridge routes traffic
OPENSHIFT SDN REPRISE
[Diagram: the host-side veth interfaces attach to the br0 Open vSwitch bridge, which switches traffic between the pods on the node.]
87. Cluster traffic exits host using vxlan interface
OPENSHIFT SDN REPRISE
[Diagram: br0 forwards pod-to-pod traffic destined for other nodes out through the vxlan interface and the node's eth0.]
88. Outbound traffic exits using tunnel interface and iptables
OPENSHIFT SDN REPRISE
[Diagram: traffic leaving the cluster passes from br0 through tun0, is NAT'ed by iptables, and exits via the node's eth0.]
89. Packet Flow: Pod to Pod, Same Host
OPENSHIFT SDN REPRISE
[Diagram: POD 1 (veth0, 10.1.15.2/24) sends to POD 2 (veth1, 10.1.15.3/24) via br0 (10.1.15.1/24); the packet never touches the node's eth0 (192.168.0.100) or vxlan0.]
90. Packet Flow: Pod to Pod, Different Host
OPENSHIFT SDN REPRISE
[Diagram: POD 1 (veth0, 10.1.15.2/24) on NODE 1 sends via br0 (10.1.15.1/24) into vxlan0, across the physical network from eth0 192.168.0.100 to eth0 192.168.0.200, then up through vxlan0 and br0 (10.1.20.1/24) on NODE 2 to POD 2 (veth0, 10.1.20.2/24).]
91. Packet Flow: Pod to External Host
OPENSHIFT SDN REPRISE
[Diagram: POD 1 (veth0, 10.1.15.2/24) on NODE 1 sends via br0 (10.1.15.1/24) to tun0, and out of eth0 (192.168.0.100) to the external host.]
93. The OpenShift "Router"
GETTING TRAFFIC INTO THE CLUSTER REPRISE
Deployed as a pod
HAProxy instances deployed as pods on compute hosts, bound to host ports 443/80. All traffic bound for cluster workloads enters through the "Router".
Maps FQDN to a service
The Host header is used to determine where to proxy traffic.
Dynamically configured
Continually monitors cluster state and reconfigures itself.
94. The OpenShift "Router"
GETTING TRAFFIC INTO THE CLUSTER REPRISE
[Diagram: a client resolves *.apps.ocp4.example.com (or myapp.example.com) to the router pod on an RHCOS node and connects on ports 80/443; the router monitors the master (API/authentication, data store) for changes and proxies each request to a pod over the SDN.]
95. An OpenShift Service
GETTING TRAFFIC INTO THE CLUSTER REPRISE
Pod IP and service IP stored in etcd
Generated when objects are created
DNS name for the service stored in cluster DNS
Route and pod lookups resolve to the service IP
Kubelet and kube-proxy modify cluster host iptables rules
iptables DNAT rules map service IPs to pod IPs
96. An OpenShift Service
GETTING TRAFFIC INTO THE CLUSTER REPRISE
[Diagram: kube-proxy on each RHCOS node monitors the master (API/authentication, data store) for changes and programs iptables DNAT rules from service IP to pod IP; pods resolve service names via cluster DNS.]
98. Container Security Starts with Linux Security
SECURITY
Because containers start with Linux, Red Hat's containers inherit RHEL's industry-leading security practices and reputation.
● Security in the RHEL host applies to the container
● SELinux and kernel namespaces are the one-two punch no one can beat
● Protects not only the host, but containers from each other
● Common Criteria certification, including the container framework
[Diagram: containers on the Linux container host (kernel) alongside the Kubernetes kubelet, protected by SELinux, namespaces, seccomp, sVirt, cgroups, identity, and audit/logs.]
99. Container Host Security
RHEL CoreOS
● Minimal: only what's needed to run containers
● Secure: read-only and locked down
● Immutable: image-based deployments and updates
● Always up-to-date: OS updates are automated and transparent
● Updates never break apps: isolates all applications as containers
● Updates never break clusters: OS components are compatible with the cluster
● Supported on infra of choice: inherits the majority of the RHEL ecosystem
● Simple to configure: installer-generated configuration
● Effortless to manage: managed by Kubernetes Operators
100. Value of SELinux and OpenShift security profiles
Issue: Vulnerability (CVE-2019-5736): "Execution of malicious containers allows for container escape and access to host filesystem"
Red Hat protection: This vulnerability is mitigated on Red Hat Enterprise Linux if SELinux is in enforcing mode. SELinux in enforcing mode is a prerequisite for OpenShift Container Platform, as are the default seccomp security profiles. Seccomp (secure computing mode) is used to restrict the set of system calls applications can make, allowing cluster administrators greater control over the security of workloads running in OpenShift Container Platform.
Longer blog article on this topic: https://www.redhat.com/en/blog/it-starts-linux-how-red-hat-helping-counter-linux-container-security-flaws
102. Certificates and Certificate Management
OPENSHIFT SECURITY | Comprehensive features
● OpenShift provides its own internal CA
● Certificates are used to provide secure connections to:
  ○ master (APIs) and nodes
  ○ Ingress controller and registry
  ○ etcd
● Certificate rotation is automated
● Optionally configure external endpoints to use custom certificates
[Diagram: master, nodes, ingress controller, console, etcd, and registry all secured with certificates.]
103. Configuring an Identity Provider
OPENSHIFT PLATFORM
Generally Available | Product Manager: Kirsten Newcomer
The Cluster Authentication Operator
● Use the cluster-authentication-operator to configure an identity provider. The configuration is stored in the oauth/cluster custom resource object inside the cluster.
● Once that's done, you may choose to remove kubeadmin (warning: there's no way to add it back).
● All the identity providers supported in 3.11 are supported in 4.1: LDAP, GitHub, GitHub Enterprise, GitLab, Google, OpenID Connect, HTTP request headers (for SSO), Keystone, Basic authentication.
● For more information: Understanding identity provider configuration; cluster-authentication-operator
105. Fine-Grained RBAC
OPENSHIFT SECURITY | Comprehensive features
● Project scope & cluster scope available
● Matches request attributes (verb, object, etc.)
● If no roles match, the request is denied (deny by default)
● Operator- and user-level roles are defined by default
● Custom roles are supported
107. OpenShift Cluster Monitoring
OPENSHIFT MONITORING | Solution Overview
Metrics collection and storage via Prometheus, an open-source monitoring system and time series database.
Metrics visualization via Grafana, the leading metrics visualization technology.
Alerting/notification via Prometheus' Alertmanager, an open-source tool that handles alerts sent by Prometheus.
111. Observability via log exploration and corroboration with EFK
OPENSHIFT LOGGING | Solution Overview
Components
○ Elasticsearch: a search and analytics engine to store logs
○ Fluentd: gathers logs and sends them to Elasticsearch
○ Kibana: a web UI for Elasticsearch
Access control
○ Cluster administrators can view all logs
○ Users can only view logs for their projects
Ability to forward logs elsewhere
○ External Elasticsearch, Splunk, etc.
116. Storage Focus
OPENSHIFT STORAGE
● Cluster Storage Operator
  ○ Sets up the default storage class
  ○ Looks through the cloud provider and sets up the correct storage class
● Drivers themselves remain in-tree for now, CSI versions to follow later
● New GA storage in 4.2
  ○ Local Volume
  ○ Raw Block
    ■ Cloud providers (AWS, GCP, Azure, vSphere)
    ■ Local Volume

SUPPORTED STORAGE DEVICES
AWS EBS               iSCSI
Azure File & Disk     Fibre Channel
GCE PD                HostPath
VMware vSphere Disk   Local Volume (NEW)
NFS                   Raw Block (NEW)
117. PV Consumption
OPENSHIFT CONTAINER PLATFORM | Persistent Storage
[Diagram: a pod on a node references claim Z; the claim is bound to a PV, and the kubelet mounts the backing storage (/foo/bar) into the container.]
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: z
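The claim named z that the pod references would be defined roughly as follows (the size and access mode are illustrative, not from the deck):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: z
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```

The cluster binds the claim to a matching PV (or dynamically provisions one, as the next slide shows), and the kubelet mounts it into the pod.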
119. Dynamic Storage Provisioning
OPENSHIFT CONTAINER PLATFORM | Persistent Storage
[Diagram: the admin defines StorageClasses on the master, e.g. Fast (NetApp Flash), Block (VMware VMDK), and Good (NetApp SSD). A user's claim Z (2Gi RWX, class Good) triggers creation of a 2Gi NFS PV, which is bound to the claim and mounted into the pod's container via VolumeMount Z in the pod definition.]
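A StorageClass like the "Good" tier in the diagram might be declared as follows; the provisioner and parameters depend entirely on the backend, so AWS EBS is shown purely as an illustrative example:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: good
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # used by claims that name no class
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```

A claim requesting storageClassName: good then causes the provisioner to create and bind a fresh PV on demand, which is what the Cluster Storage Operator automates for the default class.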
120. OpenShift Container Storage 4.2
OPENSHIFT PLATFORM
Persistent data services for OCP hybrid cloud
● Complete data services: RWO, RWX & S3 (new) - block, file & object
● Persistent storage for all OCP infra and applications
● Build and deploy anywhere: consistent storage consumption, management, and operations
OCS 4.2 support with OCP 4.2
● Platform support: AWS and VMware
● Converged mode support: run as a service on the OCP cluster
● Consistent S3 across hybrid cloud
OCS 4.3
● Additional platforms: bare metal, Azure cloud
● Independent mode: run OCS outside of the OCP cluster
● Hybrid and multi-cloud S3
122. Red Hat Certified Operators
BROAD ECOSYSTEM OF WORKLOADS
Generally Available | Product Manager: Daniel Messer
Categories: storage, security, database, data services, APM, DevOps
123. OperatorHub data sources
BROAD ECOSYSTEM OF WORKLOADS
Generally Available | Product Manager: Daniel Messer
Requires an online cluster
● For 4.1, the cluster must have connectivity to the internet
● Later 4.x releases will add offline capabilities
Operator metadata
● Stored in quay.io
● Fetches channels and available versions for each Operator
Container images
● Red Hat products and certified partners come from RHCC
● Community content comes from a variety of registries
124. Services ready for your developers
BROAD ECOSYSTEM OF WORKLOADS
Generally Available | Product Manager: Daniel Messer
New Developer Catalog aggregates apps
● Blended view of Operators, Templates, and Broker-backed services
● Operators can expose multiple CRDs. Example:
  ○ MongoDB ReplicaSet
  ○ MongoDB Sharded Cluster
  ○ MongoDB Standalone
● Developers can't see any of the admin screens
Self-service is key for productivity
● Developers with access can change settings and test out new services at any time
125. Operators as a First-Class Citizen
BROAD ECOSYSTEM OF WORKLOADS
Generally Available | Product Manager: Daniel Messer
[Diagram: an operator bundle (YourOperator v1.1.2) packages the operator deployment, custom resource definitions, RBAC, API dependencies, update path, and metadata; the Operator Lifecycle Manager unpacks it into Deployment, Role, ClusterRole, RoleBinding, ClusterRoleBinding, ServiceAccount, and CustomResourceDefinition objects.]
126. Operator Lifecycle Management
BROAD ECOSYSTEM OF WORKLOADS
Generally Available | Product Manager: Daniel Messer
[Diagram: a Subscription for YourOperator tracks the operator catalog over time, moving through versions v1.1.2 → v1.1.3 → v1.2.0 → v1.2.2.]
127. Operator Lifecycle Management
BROAD ECOSYSTEM OF WORKLOADS
Generally Available | Product Manager: Daniel Messer
[Diagram: as the Subscription advances YourOperator through v1.1.2 → v1.1.3 → v1.2.0 → v1.2.2, the operator in turn upgrades the managed application from YourApp v3.0 to v3.1.]
128. Build Operators for your apps
BROAD ECOSYSTEM OF WORKLOADS
Generally Available | Product Manager: Daniel Messer
OPERATOR SDK
● Helm SDK: build operators from a Helm chart, without any coding
● Ansible SDK: build operators from Ansible playbooks and APBs
● Go SDK: build advanced operators for full lifecycle management