
Docker / Kubernetes / Istio

Containers / Container Orchestration / Service Mesh

Araf Karsh Hamid : Co-Founder/CTO, MetaMagic Global Inc., NJ, USA


Agenda

1 Docker
• 12 Factor App Methodology
• Docker Concepts
• Images and Containers
• Anatomy of a Dockerfile
• Networking / Volume

2 Kubernetes
• Kubernetes Concepts
• Namespace / Pods / ReplicaSet
• Deployment / Service / Ingress
• Rollout and Undo / Autoscale
• Kubernetes Commands

3 Kubernetes Networking
• Docker / Kubernetes Networking
• Pod to Pod Networking
• Pod to Service Networking
• Ingress and Egress – Internet
• Network Policies

4 Kubernetes Advanced Concepts
• Quotas / Limits / QoS
• Pod / Node Affinity
• Pod Disruption Budget
• Persistent Volume / Claims
• Secrets / Jobs / Cron

5 Istio
• Istio Concepts
• Gateway / Virtual Service
• Destination Rule / Service Entry
• A/B Testing using Canary
• Beta Testing using Canary
• Logging and Monitoring

6 Best Practices
• Docker Best Practices
• Kubernetes Best Practices

21-10-2018
12 Factor App Methodology

1. Codebase: One codebase tracked in revision control
2. Dependencies: Explicitly declare dependencies
3. Configuration: Configuration-driven apps
4. Backing Services: Treat backing services like DB, Cache as attached resources
5. Build, Release, Run: Separate build and run stages
6. Process: Execute the app as one or more stateless processes
7. Port Binding: Export services with specific port binding
8. Concurrency: Scale out via the process model
9. Disposability: Maximize robustness with fast startup and graceful exit
10. Dev / Prod Parity: Keep development, staging and production as similar as possible
11. Logs: Treat logs as event streams
12. Admin Process: Run admin tasks as one-off processes

Source: https://12factor.net/
High Level Objectives (#nn = slide numbers)

From creating a Docker container to deploying the container in a production
Kubernetes cluster. All other activities revolve around these 8 points:

Docker
1. Create Docker images (#19)
2. Run Docker containers for testing (#19)
3. Push the containers to a registry (#22)
4. Make the Docker image part of your code pipeline process

Kubernetes
1. Create Pods (containers) with Deployments (#40-46)
2. Create Services (#47)
3. Create traffic rules (Ingress / Gateway / Virtual Service / Destination Rules) (#49, #97-113)
4. Create External Services

Docker Containers

• Understanding Containers
• Docker Images / Containers
• Docker Networking
What’s a Container?

Looks like a Virtual Machine.
Walks like a Virtual Machine.
Runs like a Virtual Machine.

Containers are a sandbox inside the Linux kernel, sharing the kernel with
separate network stack, process stack, IPC stack etc.
Servers / Virtual Machines / Containers

[Diagram: four stacks compared side by side.
• Server: Hardware > Host OS > BINS/LIB > Apps 1-3
• Type 1 Hypervisor: Hardware > Hypervisor > Guest OS per VM > BINS/LIB > App
• Type 2 Hypervisor: Hardware > Host OS > Hypervisor > Guest OS per VM > BINS/LIB > App
• Container: Hardware > Host OS > BINS/LIB per container > Apps 1-3]

Docker containers are Linux Containers

Namespaces
• The real magic behind containers: creates barriers between processes
• Different namespaces: PID namespace, Net namespace, IPC namespace, MNT namespace
• Linux kernel namespaces introduced between kernel 2.6.15 – 2.6.26

Cgroups
• Kernel feature
• Groups processes
• Controls resource allocation: CPU, CPU sets, Memory, Disk, Block I/O

Copy on Write

Docker Container (Images)
• Not a file system
• Not a VHD
• Basically a tar file
• Has a hierarchy (arbitrary depth)
• Fits into the Docker Registry

lxc-start / docker run

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/ch01
Docker Container – Linux and Windows

Linux
• Control Groups: cgroups
• Namespaces: pid, net, ipc, mnt, uts
• Layer Capabilities: Union file systems (AUFS, btrfs, vfs)

Windows
• Control Groups: Job Objects
• Namespaces: Object Namespace, Process Table, Networking
• Layer Capabilities: Registry, UFS-like extensions

Namespaces: the building blocks of containers
Docker Key Concepts

Docker Images
• A Docker image is a read-only template.
• For example, an image could contain an Ubuntu operating system with Apache and your web application installed.
• Images are used to create Docker containers.
• Docker provides a simple way to build new images or update existing images, or you can download Docker images that other people have already created.
• Docker images are the build component of Docker.

Docker Containers
• Docker containers are similar to a directory.
• A Docker container holds everything that is needed for an application to run.
• Each container is created from a Docker image.
• Docker containers can be run, started, stopped, moved, and deleted.
• Each container is an isolated and secure application platform.
• Docker containers are the run component of Docker.

Docker Registries
• Docker registries hold images.
• These are public or private stores from which you upload or download images.
• The public Docker registry is called Docker Hub. It provides a huge collection of existing images for your use.
• These can be images you create yourself or images that others have previously created.
• Docker registries are the distribution component of Docker.
How Docker works….

Docker Client > Docker Daemon > Docker Hub

$ docker search ….
$ docker build ….
$ docker push ….
$ docker container create ..
$ docker container run ..
$ docker container start ..
$ docker container stop ..
$ docker container ls ..
$ docker swarm ..

1. The client searches for the container image
2. The Docker Daemon sends the request to Docker Hub
3. The Docker Daemon downloads the image
4. The Docker Daemon runs the container from the image

Docker Daemon – Shared Host Kernel (Linux)

[Diagram: Docker Client > Docker Daemon running CentOS, Alpine and Debian
containers, all sharing the Linux kernel of the HOST OS (Ubuntu).]

All the containers share the same host OS kernel. If you require a specific
kernel version, then the host kernel needs to be updated.
Docker Daemon – Shared Host Kernel (Windows)

[Diagram: Docker Client > Docker Daemon running Nano Server and Server Core
containers, all sharing the Windows kernel of the HOST OS (Windows 10).]

All the containers share the same host OS kernel. If you require a specific
kernel version, then the host kernel needs to be updated.
Docker Image structure

• Images are read-only.
• Multiple layers of images give the final container.
• Layers can be sharable.
• Layers are portable.

Example layer stack (bottom to top): Debian base image > Emacs > Apache > writable container layer.
Running a Docker Container

$ docker pull ubuntu                 Docker pulls the image from the Docker Registry

Create an Ubuntu container, run it, and execute a bash shell with a script:
$ ID=$(docker container run -d ubuntu /bin/bash -c "while true; do date; sleep 1; done")

$ docker container logs $ID          Show output from the container (the bash script)

$ docker container ls                List the running containers
Anatomy of a Dockerfile

FROM
  The FROM instruction sets the Base Image for subsequent instructions. As such, a
  valid Dockerfile must have FROM as its first instruction. The image can be any valid
  image; it is especially easy to start by pulling an image from the public repositories.
  Example: FROM ubuntu / FROM alpine

MAINTAINER
  The MAINTAINER instruction allows you to set the Author field of the generated images.
  Example: MAINTAINER johndoe

LABEL
  The LABEL instruction adds metadata to an image. A LABEL is a key-value pair. To
  include spaces within a LABEL value, use quotes and backslashes as you would in
  command-line parsing.
  Example: LABEL version="1.0" / LABEL vendor="M2"

RUN
  The RUN instruction will execute any commands in a new layer on top of the current
  image and commit the results. The resulting committed image will be used for the
  next step in the Dockerfile.
  Example: RUN apt-get install -y curl

ADD
  The ADD instruction copies new files, directories or remote file URLs from <src> and
  adds them to the filesystem of the container at the path <dest>.
  Example: ADD hom* /mydir/ / ADD hom?.txt /mydir/

COPY
  The COPY instruction copies new files or directories from <src> and adds them to the
  filesystem of the container at the path <dest>.
  Example: COPY hom* /mydir/ / COPY hom?.txt /mydir/

ENV
  The ENV instruction sets the environment variable <key> to the value <value>. This
  value will be in the environment of all "descendant" Dockerfile commands and can be
  replaced inline in many as well.
  Example: ENV JAVA_HOME /JDK8 / ENV JRE_HOME /JRE8
Anatomy of a Dockerfile

VOLUME
  The VOLUME instruction creates a mount point with the specified name and marks it as
  holding externally mounted volumes from the native host or other containers. The value
  can be a JSON array, VOLUME ["/var/log/"], or a plain string with multiple arguments,
  such as VOLUME /var/log or VOLUME /var/log /var/db.
  Example: VOLUME /data/webapps

USER
  The USER instruction sets the user name or UID to use when running the image and for
  any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile.
  Example: USER johndoe

WORKDIR
  The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY
  and ADD instructions that follow it in the Dockerfile.
  Example: WORKDIR /home/user

CMD
  There can only be one CMD instruction in a Dockerfile. If you list more than one CMD,
  then only the last CMD will take effect. The main purpose of a CMD is to provide
  defaults for an executing container. These defaults can include an executable, or they
  can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.
  Example: CMD echo "This is a test." | wc -

EXPOSE
  The EXPOSE instruction informs Docker that the container will listen on the specified
  network ports at runtime. Docker uses this information to interconnect containers using
  links and to determine which ports to expose to the host when using the -P flag with
  the docker client.
  Example: EXPOSE 8080

ENTRYPOINT
  An ENTRYPOINT allows you to configure a container that will run as an executable.
  Command-line arguments to docker run <image> will be appended after all elements in an
  exec-form ENTRYPOINT, and will override all elements specified using CMD. This allows
  arguments to be passed to the entry point, i.e., docker run <image> -d will pass the -d
  argument to the entry point. You can override the ENTRYPOINT instruction using the
  docker run --entrypoint flag.
  Example: ENTRYPOINT ["top", "-b"]

Build Docker Containers as easy as 1-2-3

1. Create the Dockerfile
2. Build the image
3. Run the container
Build a Docker Java image

1. Create your Dockerfile
   • FROM
   • RUN
   • ADD
   • WORKDIR
   • USER
   • ENTRYPOINT

2. Build the Docker image
   $ docker build -t org/java:8 .

3. Run the container
   $ docker container run -it org/java:8
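The instruction list above can be fleshed out into a complete Dockerfile. This is a minimal sketch, not the workshop's actual file: the base image tag, the jar path (target/app.jar) and the user name (appuser) are assumptions for illustration.

```dockerfile
# Hypothetical Dockerfile for a small Java app (names are illustrative)
FROM openjdk:8-jre-alpine                          # base image (assumption)
RUN addgroup -S app && adduser -S appuser -G app   # create a non-root user
ADD target/app.jar /opt/app/app.jar                # copy the application jar
WORKDIR /opt/app                                   # subsequent instructions run here
USER appuser                                       # drop root privileges
ENTRYPOINT ["java", "-jar", "app.jar"]             # the container's main process
```

Build and run it exactly as in steps 2 and 3 above: `docker build -t org/java:8 .` followed by `docker container run -it org/java:8`.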
Docker Container Management

$ ID=$(docker container run -it ubuntu /bin/bash)      Start the container and store the ID in the ID variable
$ docker container stop $ID                            Stop the container using the container ID
$ docker container stop $(docker container ls -aq)     Stop all the containers
$ docker container rm $ID                              Remove the container
$ docker container rm $(docker container ls -aq)       Remove ALL the containers (in Exit status)
$ docker container start $ID                           Start the container
$ docker container prune                               Remove ALL stopped containers

$ docker container run --restart=Policy -d -it ubuntu /sh      Policies = no / on-failure / always
$ docker container run --restart=on-failure:3 -d -it ubuntu /sh
  Will restart the container ONLY 3 times if a failure happens
Docker Container Management

$ ID=$(docker container run -d -i ubuntu)      Start the container and store the ID in the ID variable
$ docker container exec -it $ID /bin/bash      Inject a process into the running container

$ ID=$(docker container run -d -i ubuntu)      Start the container and store the ID in the ID variable
$ docker container inspect $ID                 Read the container's metadata

Docker Commit
$ docker container run -it ubuntu /bin/bash    • Start the Ubuntu container
# apt-get update
# apt-get install -y apache2                   • Install Apache
# exit                                         • Exit the container
$ docker container ls -a                       • Get the container ID (Ubuntu)
$ docker container commit --author="name" --message="Ubuntu / Apache2" containerId apache2
                                               • Commit the container with a new name

$ docker container run --cap-drop=chown -it ubuntu /sh    Prevent chown inside the container

Source: https://github.com/meta-magic/kubernetes_workshop
Docker Image Commands

$ docker login ….                                    Log into Docker Hub to push images
$ docker push image-name                             Push the image to Docker Hub
$ docker image history image-name                    Get the history of the Docker image
$ docker image inspect image-name                    Get the Docker image details
$ docker image save --output=file.tar image-name     Save the Docker image as a tarball
$ docker container export --output=file.tar c79aa23dd2    Export the container to a file

Source: https://github.com/meta-magic/kubernetes_workshop
Build Docker Apache image

1. Create your Dockerfile
   • FROM alpine
   • RUN
   • COPY
   • EXPOSE
   • ENTRYPOINT

2. Build the Docker image
   $ docker build -t org/apache2 .

3. Run the container
   $ docker container run -d -p 80:80 org/apache2
   $ curl localhost
Build Docker Tomcat image

1. Create your Dockerfile
   • FROM alpine
   • RUN
   • COPY
   • EXPOSE
   • ENTRYPOINT

2. Build the Docker image
   $ docker build -t org/tomcat .

3. Run the container
   $ docker container run -d -p 8080:8080 org/tomcat
   $ curl localhost:8080
Docker Images in the Github Workshop

[Image hierarchy: each image is built FROM the one above it.
Ubuntu > My Ubuntu
My Ubuntu > My JRE8 and My JRE11
My JRE8 > My TC8 (Tomcat 8); My JRE11 > Tomcat 9 / Spring Boot (My Boot)
My TC8 > My App 1, My App 2; My Boot > My App 3, My App 4]

Source: https://github.com/meta-magic/kubernetes_workshop

Docker Networking

• Docker Networking – Bridge / Host / None
• Docker Container sharing IP Address
• Docker Communication – Node to Node
• Docker Volumes
Docker Networking – Bridge / Host / None

$ docker network ls

$ docker container run --rm --network=host alpine brctl show

$ docker network create tenSubnet --subnet 10.1.0.0/16
Docker Networking – Bridge / Host / None
https://docs.docker.com/network/#network-drivers

$ docker container run --rm alpine ip address             Default bridge network
$ docker container run --rm --net=host alpine ip address  Host network
$ docker container run --rm --net=none alpine ip address  No network stack


Docker Containers Sharing an IP Address

$ docker container run -itd --name ipctr alpine ip address
$ docker container run --rm --net container:ipctr alpine ip address

[Diagram: Service 1, Service 2 and Service 3 containers all sharing the IP
address of the ipctr container.]
Docker Networking: Node to Node

[Diagram: Node 1 and Node 2 each run Container 1 (Web Server 8080), Container 2
(Microservice 9002), Container 3 (Microservice 9003) and Container 4
(Microservice 9004) on a Docker0 bridge at 172.17.3.1/16, with container IPs
172.17.3.2 - 172.17.3.5 on both nodes. The same IP addresses for the containers
across different nodes mean cross-node communication requires NAT via iptables
rules, over the node interfaces eth0 10.130.1.101/24 and 10.130.1.102/24.]
Docker Volumes

Data volumes are special directories in the Docker host.

$ docker volume create hostvolume
$ docker volume ls
$ docker container run -it --rm -v hostvolume:/data alpine
# echo "This is a test from the Container" > /data/data.txt

Source: https://github.com/meta-magic/kubernetes_workshop
Docker Volumes

$ docker container run --rm -v $HOME/data:/data alpine     Mount a specific file path

Source: https://github.com/meta-magic/kubernetes_workshop

Kubernetes

Kubernetes Key Aspects

Architecture
• Declarative Model: using yaml or json, declare the desired state of the app.
• Desired State: the state is stored in the cluster store.
• Self-healing is done by Kubernetes using watch loops if the desired state is changed.

Master Node (Control Plane)
• API Server: RESTful yaml / json over port 443 ($ kubectl talks to it).
• Cluster Store (etcd): key-value store holding the desired state; the API Server talks to it via gRPC / ProtoBuf.
• Controller Manager: Node Controller, Endpoint Controller, Deployment Controller, Pod Controller, etc.
• Scheduler
• Cloud Controller: for the cloud providers to manage nodes, services, routes, volumes etc.

Worker Node
• Kubelet (port 10255)
• Container Runtime Interface: allows multiple implementations of containers from v1.7.
• Kube-Proxy: network proxy doing TCP / UDP forwarding via IPTABLES / IPVS.

Kinds
• Cluster-scoped kinds: Namespace, Node, Resource Quota, Limit Range, Persistent Volume.
• Namespaced kinds: Pod, ReplicaSet, Deployment, Service, Endpoints, StatefulSet, Secrets.
• Istio kinds: Virtual Service, Gateway, Service Entry, Destination Rule, Policy, MeshPolicy, RbacConfig, Prometheus, Rule, ListChecker, etc.

Pods
• Every definition carries apiVersion, kind, metadata and spec.
• The POD (cgroups / namespaces) is itself a Linux container; the Docker containers run inside the POD. PODs with single or multiple containers (Sidecar Pattern) share the cgroups, volumes and namespaces of the POD.

Services and Traffic
• Pod IP addresses are dynamic, so communication should be based on a Service, which has a routable IP address and a DNS name.
• Labels (e.g. BE, 1.2) play a critical role in ReplicaSets, Deployments and Services: a Label Selector selects Pods based on their Labels. Endpoints track the Pod IPs.
• Deployment: updates and rollbacks, canary release. ReplicaSet: self-healing, scalability, desired state.
• Traffic from the Internet passes the firewall to an external Load Balancer (e.g. 15.1.2.100, DNS a.b.com), then via the Ingress / Service (Cluster IP) to the backend Pods (e.g. 10.1.2.34 - 10.1.2.36).
Kubernetes Setup – Minikube

• Minikube provides a developer environment with a master and a single-node
  installation, with all necessary add-ons installed, like DNS, Ingress controller etc.
• In a real-world production environment you will have a master installed (with
  failover) and 'n' number of nodes in the cluster.
• If you go with a cloud provider like Amazon EKS, then the nodes will be created
  automatically based on the load.
• Minikube is available for Linux, Mac OS and Windows.

Ubuntu Installation   https://kubernetes.io/docs/tasks/tools/install-kubectl/

$ sudo snap install kubectl --classic    Install kubectl using the Snap package manager
$ kubectl version                        Show the current version of kubectl
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.30.0/minikube-linux-amd64
$ chmod +x minikube && sudo mv minikube /usr/local/bin/

Source: https://github.com/meta-magic/kubernetes_workshop
Kubernetes Setup – Minikube

Mac OS Installation   https://kubernetes.io/docs/tasks/tools/install-kubectl/

$ brew install kubernetes-cli      Install kubectl using the brew package manager
$ kubectl version                  Show the current version of kubectl
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.30.0/minikube-darwin-amd64
$ chmod +x minikube && sudo mv minikube /usr/local/bin/

Windows Installation

C:\> choco install kubernetes-cli  Install kubectl using the Choco package manager
C:\> kubectl version               Show the current version of kubectl
C:\> cd c:\users\youraccount
C:\> mkdir .kube                   Create the .kube directory
C:\> minikube-installer.exe        Install Minikube using the Minikube installer

Source: https://github.com/meta-magic/kubernetes_workshop


Kubernetes Setup – Master / Nodes

$ kubeadm init                                  Initialize the master
node1$ kubeadm join --token enter-token-from-kubeadm-cmd Node-IP:Port    Add a node

$ kubectl get nodes                List all nodes
$ kubectl cluster-info             Show the cluster details
$ kubectl get namespace            Show all the namespaces
$ kubectl config current-context   Show the current context

Create a set of Pods for the Hello World App with an external IP address (Imperative Model):

$ kubectl run hello-world --replicas=7 --labels="run=load-balancer-example" --image=metamagic/hello:1.0 --port=8080
  Creates a Deployment object and a ReplicaSet object with 7 replicas of the Hello-World Pod running on port 8080

$ kubectl expose deployment hello-world --type=LoadBalancer --name=hello-world-service
  Creates a Service object that exposes the deployment (Hello-World) with an external IP address

$ kubectl get deployments hello-world            List the Hello-World deployments
$ kubectl describe deployments hello-world       Describe the Hello-World deployments
$ kubectl get replicasets                        List all the ReplicaSets
$ kubectl describe replicasets                   Describe the ReplicaSets
$ kubectl get pods -o wide                       List all the Pods with internal IP addresses
$ kubectl get services hello-world-service       List the Service with Cluster IP and External IP
$ kubectl describe services hello-world-service  Describe the Service Hello-World-Service
$ kubectl delete services hello-world-service    Delete the Service Hello-World-Service
$ kubectl delete deployment hello-world          Delete the Hello-World deployment

Source: https://github.com/meta-magic/kubernetes_workshop
Focus on the Declarative Model

3 Fundamental Concepts
1. Desired State
2. Current State
3. Declarative Model
Kubernetes Commands – Namespace (Declarative Model)

• Namespaces are used to group your teams and software into logical business groups.
• A definition of a Service adds an entry in DNS with respect to the Namespace.
• Not all objects live in a Namespace, e.g. Nodes, Persistent Volumes etc.

$ kubectl get namespace                   List all the Namespaces
$ kubectl describe ns ns-name             Describe the Namespace
$ kubectl get pods --namespace=ns-name    List the Pods from your namespace
$ kubectl create -f app-ns.yml            Create the Namespace
$ kubectl apply -f app-ns.yml             Apply the changes to the Namespace

$ kubectl config set-context $(kubectl config current-context) --namespace=your-ns
  The above command will let you switch to your namespace (your-ns).
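A minimal sketch of what an app-ns.yml like the one referenced above could contain; the namespace name and label are assumptions for illustration:

```yaml
# app-ns.yml (illustrative): a minimal Namespace definition
apiVersion: v1
kind: Namespace
metadata:
  name: fusion-app     # assumed name; use your own team/app namespace
  labels:
    team: fusion       # labels are optional but help with quotas and policies
```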
Kubernetes Pods

• A Pod is the atomic unit: Virtual Server (big) > Pod > Container (small).
• A Pod is a shared environment for one or more containers.
• A Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node.
• A Pod is a "pause" container.

$ kubectl create -f app1-pod.yml
$ kubectl get pods

Source: https://github.com/meta-magic/kubernetes_workshop
Kubernetes Commands – Pods (Declarative Model)

$ kubectl create -f app-pod.yml        Create the Pod
$ kubectl apply -f app-pod.yml         Apply the changes to the Pod
$ kubectl replace -f app-pod.yml       Replace the existing config of the Pod
$ kubectl get pods                     List all the Pods
$ kubectl describe pods pod-name       Describe the Pod details
$ kubectl describe pods -l app=name    Describe the Pod based on the label value
$ kubectl get pods -o json pod-name    List the Pod details in JSON format
$ kubectl get pods -o wide             List all the Pods with Pod IP addresses

$ kubectl exec pod-name ps aux         Execute commands in the first container in the Pod
$ kubectl exec -it pod-name sh         Log into the container shell

$ kubectl exec -it -c container-name pod-name sh
  By default kubectl executes the commands in the first container in the Pod. If you are
  running multiple containers (sidecar pattern) then you need to pass the -c / --container
  flag with the name of the container in the Pod to execute your command. You can see the
  ordering of the containers and their names using the describe command.

$ kubectl logs pod-name container-name    Show the logs of the given container

Source: https://github.com/meta-magic/kubernetes_workshop
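A sketch of what an app-pod.yml like the one used above could look like; the pod name, label value and image are assumptions (the image is borrowed from the earlier imperative example):

```yaml
# app-pod.yml (illustrative): a single-container Pod
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  labels:
    app: myapp                       # the label that -l app=... selectors match
spec:
  containers:
    - name: app
      image: metamagic/hello:1.0    # assumed image
      ports:
        - containerPort: 8080
```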
Kubernetes ReplicaSet

• Pods wrap around containers with benefits like shared location, secrets, networking etc.
• A ReplicaSet wraps around Pods and brings in the replication requirements of the Pod.

• A ReplicaSet defines 2 things:
  • Pod Template
  • Desired number of replicas

What we want is the Desired State. Game On!

Source: https://github.com/meta-magic/kubernetes_workshop
Kubernetes Commands – ReplicaSet (Declarative Model)

$ kubectl get rs                  List all the ReplicaSets
$ kubectl describe rs rs-name     Describe the ReplicaSet details
$ kubectl get rs/rs-name          Get the ReplicaSet status

$ kubectl create -f app-rs.yml
  Create the ReplicaSet, which will automatically create all the Pods

$ kubectl apply -f app-rs.yml
  Apply new changes to the ReplicaSet, for example scaling the replicas
  from x to x + new value

$ kubectl delete rs/app-rs --cascade=false
  Deletes the ReplicaSet. With --cascade=true (the default) all the Pods are
  deleted too; --cascade=false keeps all the Pods running and ONLY the
  ReplicaSet is deleted.
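An app-rs.yml like the one above might look like this; it shows the two things a ReplicaSet defines (Pod template and desired replica count). Names, labels and the image are illustrative assumptions:

```yaml
# app-rs.yml (illustrative): ReplicaSet = Pod template + desired replicas
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: app-rs
spec:
  replicas: 3                  # desired state: 3 Pods
  selector:
    matchLabels:
      app: myapp               # must match the Pod template labels below
  template:                    # the Pod template
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: metamagic/hello:1.0   # assumed image
          ports:
            - containerPort: 8080
```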
Kubernetes Deployment
(Declarative Model)

• Deployments manage ReplicaSets, and
• ReplicaSets manage Pods.

• Deployment is all about rolling updates and
  • Rollbacks
  • Canary Deployments

Source: https://github.com/meta-magic/kubernetes_workshop
Kubernetes Commands – Deployment (Declarative Model)

$ kubectl get deploy app-deploy                   List the Deployment
$ kubectl describe deploy app-deploy              Describe the Deployment details
$ kubectl rollout status deployment app-deploy    Show the rollout status of the Deployment
$ kubectl rollout history deployment app-deploy   Show the rollout history of the Deployment

$ kubectl create -f app-deploy.yml
  Creates the Deployment. Deployments contain Pods and their replica information.
  Based on the Pod info, the Deployment will start downloading the (Docker)
  containers and install them based on the replication factor.

$ kubectl apply -f app-deploy.yml --record        Update the existing Deployment

$ kubectl rollout undo deployment app-deploy --to-revision=1
$ kubectl rollout undo deployment app-deploy --to-revision=2
  Rolls back or forward to a specific revision number of your app.

$ kubectl scale deployment app-deploy --replicas=6    Scale up the Pods to 6 from the initial 2 Pods

Source: https://github.com/meta-magic/kubernetes_workshop
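A sketch of an app-deploy.yml like the one above. The RollingUpdate strategy is what enables the rollout and rollback commands; names, labels and the image are illustrative assumptions:

```yaml
# app-deploy.yml (illustrative): a Deployment managing a ReplicaSet of Pods
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deploy
spec:
  replicas: 2                  # the initial 2 Pods (scaled to 6 above)
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate        # enables rolling updates / rollbacks
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: metamagic/hello:1.0   # assumed image
          ports:
            - containerPort: 8080
```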
Kubernetes Services

Why do we need Services?
• Accessing Pods from inside the cluster
• Accessing Pods from outside the cluster
• Autoscaling brings up Pods with new IP addresses or removes existing Pods.
• Pod IP addresses are dynamic.

A Service has a stable IP address, and uses Labels to associate with a set of Pods.

Service Types
1. Cluster IP (default)
2. Node Port
3. Load Balancer
4. External Name

Source: https://github.com/meta-magic/kubernetes_workshop
Kubernetes Commands – Service / Endpoints (Declarative Model)

$ kubectl get svc                     List all the Services
$ kubectl describe svc app-service    Describe the Service details
$ kubectl get ep app-service          List the status of the Endpoints
$ kubectl describe ep app-service     Describe the Endpoint details

$ kubectl create -f app-service.yml
  Create a Service for the Pods. The Service focuses on creating a routable IP
  address and DNS entry for the Pods selected by the labels defined in the
  Service. Endpoints are created automatically based on the labels in the Selector.

$ kubectl delete svc app-service      Delete the Service

Service Types
• Cluster IP (default): Exposes the Service on an internal IP in the cluster.
  This type makes the Service only reachable from within the cluster.
• Node Port: Exposes the Service on the same port of each selected Node in the
  cluster using NAT. Makes a Service accessible from outside the cluster using
  <NodeIP>:<NodePort>. Superset of ClusterIP.
• Load Balancer: Creates an external load balancer in the current cloud (if
  supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
• External Name: Exposes the Service using an arbitrary name (specified by
  externalName in the spec) by returning a CNAME record with the name. No proxy
  is used. This type requires v1.7 or higher of kube-dns.

Source: https://github.com/meta-magic/kubernetes_workshop
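An app-service.yml like the one above could look like this; the selector, service name and ports are illustrative assumptions:

```yaml
# app-service.yml (illustrative): stable IP + DNS for Pods selected by label
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  type: ClusterIP          # default; NodePort / LoadBalancer are also possible
  selector:
    app: myapp             # Endpoints are created for Pods with this label
  ports:
    - port: 80             # the Service port
      targetPort: 8080     # the container port on the selected Pods
```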
Kubernetes Ingress (Declarative Model)

• An Ingress is a collection of rules that allow inbound connections to reach
  the cluster services.
• Ingress is still a beta feature in Kubernetes.
• Ingress Controllers are pluggable; the Ingress Controller in AWS is linked to
  the AWS Load Balancer.

Source: https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers
Source: https://github.com/meta-magic/kubernetes_workshop
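A minimal Ingress rule routing inbound HTTP traffic to a Service might be sketched as below. The apiVersion matches the beta status noted above; the host name and backend Service are illustrative assumptions:

```yaml
# app-ingress.yml (illustrative): route inbound traffic to a Service
apiVersion: extensions/v1beta1   # Ingress was still beta at the time of these slides
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: a.b.com              # assumed external DNS name
      http:
        paths:
          - path: /
            backend:
              serviceName: app-service   # assumed Service
              servicePort: 80
```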
Kubernetes Auto Scaling Pods (Declarative Model)

• You can declare the autoscaling requirements for every Deployment (Microservice).
• Kubernetes will add Pods automatically based on CPU utilization.
• The Kubernetes cloud infrastructure will automatically add Nodes if it runs
  out of available Nodes.

CPU utilization is kept at 10% to demonstrate the autoscaling feature; ideally
it should be around 80% - 90%.

Source: https://github.com/meta-magic/kubernetes_workshop
Kubernetes Horizontal Pod Auto Scaler

Deploy your app with autoscaling parameters:
$ kubectl autoscale deployment appname --cpu-percent=50 --min=1 --max=10
$ kubectl get hpa

Generate load to see autoscaling in action:
$ kubectl run -it podshell --image=metamagicglobal/podshell
  Hit enter for the command prompt
$ while true; do wget -q -O- http://yourapp.default.svc.cluster.local; done

To attach to the running container:
$ kubectl attach podshell-name -c podshell -it

Source: https://github.com/meta-magic/kubernetes_workshop
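The kubectl autoscale command above can also be expressed declaratively. This is a sketch of the equivalent HorizontalPodAutoscaler object; the target Deployment name is the assumed appname from the command:

```yaml
# Declarative equivalent (sketch) of: kubectl autoscale deployment appname
#   --cpu-percent=50 --min=1 --max=10
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: appname-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: appname                      # the Deployment to scale (assumed)
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50   # add Pods when average CPU exceeds 50%
```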
Kubernetes Networking

• Comparison between Docker and Kubernetes Networking
• Kubernetes DNS
• Pod to Pod Networking within the same Node
• Pod to Pod Networking across the Node
• Pod to Service Networking
• Ingress – Internet to Service Networking
• Egress – Pod to Internet Networking
Kubernetes Networking

Mandatory requirements for a network implementation:
1. All Pods can communicate with all other Pods without using Network Address
   Translation (NAT).
2. All Nodes can communicate with all the Pods without NAT.
3. The IP that is assigned to a Pod is the same IP the Pod sees itself as, as
   well as all other Pods in the cluster.

Source: https://github.com/meta-magic/kubernetes_workshop
Kubernetes Networking – 3 Networks

Networks and example CIDR ranges (RFC 1918):
1. Physical Network – 10.0.0.0/8
2. Pod Network – 172.16.0.0/12
3. Service Network – 192.168.0.0/16

Keep the address ranges separate.

Source: https://github.com/meta-magic/kubernetes_workshop
Kubernetes Networking – 3 Networks

[Diagram: three Nodes on the Physical Network (eth0 10.130.1.102-104/24).
Pods sit on the Pod Network (Pod 1 10.17.4.1 and Pod 2 10.17.4.2 on Node 1,
Pod 1 10.17.5.1 on Node 2, Pod 1 10.17.6.1 on Node 3), with containers attached
via eth0 / veth pairs. A Service on the Service Network has a virtual IP
(VIP 172.10.1.2/16); its Endpoints (EP) handle the dynamic IP addresses of the
Pods selected by the Service based on Pod labels.]

Source: https://github.com/meta-magic/kubernetes_workshop
Docker Networking Vs. Kubernetes Networking

[Diagram, Docker networking (left): on each Node the Docker0 bridge uses the
same IP range (172.17.3.1/16), so the containers (Web Server 8080, Microservices
9002-9004) get identical IPs (172.17.3.2 - 172.17.3.5) on both Nodes; cross-node
communication therefore requires NAT via iptables rules over the Node interfaces
10.130.1.101/24 and 10.130.1.102/24.

Kubernetes networking (right): each Node's L2 bridge gets a unique IP range
(10.17.3.1/16 on Node 1, 10.17.4.1/16 on Node 2), so Pod IPs are unique across
the cluster (10.17.3.x, 10.17.4.x) and no NAT is required; forwarding is based
on netfilter and IP Tables or IPVS.]
Kubernetes DNS
Kubernetes DNS avoids IP addresses in the configuration or application codebase.
It configures the kubelet running on each Node so that containers use the DNS Service IP to resolve names.

A DNS Pod consists of three separate containers:
1. Kube DNS: Watches the Kubernetes Master for changes in Services and Endpoints.
2. DNS Masq: Adds DNS caching to improve performance.
3. Sidecar: Provides a single health-check endpoint to perform health checks for Kube DNS and DNS Masq.

• The DNS Pod itself is exposed as a Kubernetes Service with a Cluster IP.
• DNS state is stored in etcd.
• Kube DNS uses a library that converts etcd name–value pairs into DNS records.
• CoreDNS is similar to Kube DNS but has a plugin architecture; from v1.11, CoreDNS is the default DNS server.

Source: https://github.com/meta-magic/kubernetes_workshop
Kubernetes: Pod to Pod Networking inside a Node

By default Linux has a single network namespace, and all the processes in that namespace share the network stack. If you create a new namespace, the processes running in it get their own network stack, routes, firewall rules, etc.

$ ip netns add namespace1   Create a namespace (a mount point for namespace1 is created under /var/run/netns)
$ ip netns                  List namespaces

1. Pod 1 sends a packet to its eth0 – eth0 is connected to veth0 in the root network namespace.
2. The L2 bridge (10.17.3.1/16) resolves the destination with the ARP protocol (the bridge implements ARP to discover the link-layer MAC address).
3. The bridge sends the packet to veth1.
4. veth1 forwards the packet directly to Pod 2 through its eth0.

This entire communication happens on localhost, so the data transfer speed is NOT affected by the Ethernet card speed.
Kubernetes: Pod to Pod Networking Across Nodes

Src: Pod1 – Dst: Pod3
1. Pod 1 sends a packet to its eth0 – eth0 is connected to veth0.
2. The bridge tries to resolve the destination with the ARP protocol; ARP fails because no device on the bridge has that IP.
3. On failure, the bridge sends the packet to eth0 of Node 1.
4. The packet leaves eth0, enters the network, and the network routes the packet to Node 2.
5. The packet enters the root namespace of Node 2 and is routed to the L2 bridge.
6. veth0 forwards the packet to eth0 of Pod 3.
Kubernetes: Pod to Service to Pod – Load Balancer

Src: Pod1 – Dst: Service1, rewritten to Src: Pod1 – Dst: Pod3
1. Pod 1 sends a packet to its eth0 – eth0 is connected to veth0.
2. The bridge tries to resolve the destination with the ARP protocol; ARP fails because no device on the bridge has that IP.
3. On failure, the bridge gives the packet to Kube Proxy.
4. The packet goes through the IP Tables rules installed by Kube Proxy, which rewrite the destination IP with Pod3's IP. IP Tables has done the cluster load balancing directly on the node, and the packet is given to eth0.
5. The packet leaves Node 1's eth0, enters the network, and the network routes the packet to Node 2.
6. The packet enters the root namespace and is routed to the L2 bridge.
7. veth0 forwards the packet to eth0 of Pod 3.
Kubernetes: Pod to Service to Pod – Return Journey

Src: Pod3 – Dst: Pod1, rewritten to Src: Service1 – Dst: Pod1
1. Pod 3 receives data from Pod 1 and sends the reply back with Source as Pod3 and Destination as Pod1.
2. The bridge tries to resolve the destination with the ARP protocol; ARP fails because no device on the bridge has that IP.
3. On failure, the bridge gives the packet to Node 2's eth0.
4. The packet leaves Node 2's eth0, enters the network, and the network routes the packet to Node 1 (Dst = Pod1).
5. The packet goes through the IP Tables rules installed by Kube Proxy, which rewrite the source IP with the Service IP. Kube Proxy gives the packet to the L2 bridge.
6. The L2 bridge makes the ARP call and hands the packet over to veth0.
7. veth0 forwards the packet to eth0 of Pod 1.
Kubernetes: Pod to Internet

Src: Pod1 – Dst: Google, rewritten to Src: VM-IP – Dst: Google, then Src: Ex-IP – Dst: Google
1. Pod 1 sends a packet to its eth0 – eth0 is connected to veth0.
2. The bridge tries to resolve the destination with the ARP protocol; ARP fails because no device on the bridge has that IP.
3. On failure, the bridge gives the packet to IP Tables.
4. The gateway would reject the Pod IP, as it recognizes only the VM IP, so the source IP is replaced with the VM-IP.
5. The packet enters the network and is routed to the Internet Gateway.
6. The packet reaches the gateway, which replaces the (internal) VM-IP with an external IP.
7. The packet reaches the external site (Google).

On the way back, the packet follows the same path and any source-IP mangling is undone: each layer understands the VM-IP and the Pod IP within the Pod namespace.
Kubernetes: Internet to Pod

Src: Client IP – Dst: App, rewritten to Src: Client IP – Dst: VM-IP, then Src: Client IP – Dst: Pod IP
1. The client connects to the application's published domain.
2. Once the Load Balancer receives the packet, it picks a VM.
3. Once inside the VM, IP Tables knows how to redirect the packet to the Pod, using the internal load-balancing rules installed into the cluster by Kube Proxy.
4. Traffic enters the Kubernetes cluster and reaches Node X.
5. Node X gives the packet to the L2 bridge.
6. The L2 bridge makes the ARP call and hands the packet over to veth0.
7. veth0 forwards the packet to eth0 of Pod 8.
Networking Glossary

Layer 2 Networking: Layer 2 is the Data Link Layer (OSI model), providing node-to-node data transfer.

Layer 4 Networking: The Transport layer controls the reliability of a given link through flow control.

Layer 7 Networking: Application-layer networking (HTTP, FTP, etc.). This is the closest layer to the end user.

Source Network Address Translation: SNAT refers to a NAT procedure that modifies the source address of an IP packet.

Destination Network Address Translation: DNAT refers to a NAT procedure that modifies the destination address of an IP packet.

ConnTrack: Conntrack is built on top of netfilter to handle connection tracking.

Netfilter: Packet filtering in Linux – software that does packet filtering, NAT and other packet mangling.

IP Tables: Allows an admin to configure netfilter for managing IP traffic.

IPVS – IP Virtual Server: Implements transport-layer load balancing as part of the Linux kernel. It is similar to IP Tables, is based on the netfilter hook function, and uses a hash table for lookups.

Source: https://github.com/meta-magic/kubernetes_workshop
Kubernetes Network Policies
Source: https://github.com/meta-magic/kubernetes_workshop
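The slide above introduces Network Policies without a manifest, so here is a minimal illustrative sketch (the names and the `app: db` / `app: api` labels are assumptions, not from the slides): only pods labelled `app: api` may reach the `db` pods, and only on port 3306.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api            # illustrative name
spec:
  podSelector:
    matchLabels:
      app: db                   # the policy applies to pods labelled app=db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api          # only api pods may connect
      ports:
        - protocol: TCP
          port: 3306
```

Note that Network Policies require a CNI plugin that enforces them (e.g. Calico); on plugins without policy support the object is accepted but has no effect.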
OSI Layers
4 68

Kubernetes Pods Advanced


• Quality of Service: Resource Quota and Limits
• Environment Variables and Config Maps
• Pod in Depth / Secrets / Presets
• Pod Disruption Budget
• Pod / Node Affinity
• Persistent Volume / Persistent Volume Claims

21-10-2018
4 69
Kubernetes Pod Quality of Service
QoS: Guaranteed – Memory limit = Memory request AND CPU limit = CPU request.
QoS: Burstable – Not Guaranteed; has either a Memory or a CPU request.
QoS: Best Effort – No Memory or CPU requests or limits.

Source: https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/
Source: https://github.com/meta-magic/kubernetes_workshop
Kubernetes Resource Quotas
• A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per namespace.

• It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace.
Source: https://kubernetes.io/docs/concepts/policy/resource-quotas/
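A ResourceQuota can be sketched as follows (a minimal illustrative manifest; the quota name, namespace and values are assumptions):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota           # illustrative name
  namespace: dev                # assumed namespace
spec:
  hard:
    pods: "10"                  # object-count limit per namespace
    requests.cpu: "4"           # sum of CPU requests in the namespace
    requests.memory: 8Gi
    limits.cpu: "8"             # sum of CPU limits in the namespace
    limits.memory: 16Gi
```

Once this quota is active, pods created in the `dev` namespace without CPU/memory requests and limits are rejected, so quotas are usually paired with a LimitRange that supplies defaults.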

21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop


4 71
Kubernetes Limit Range
• Limits specify the maximum resources a Pod can have.

• If NO limit is defined, the Pod will be able to consume more resources than it requested. However, the chances of the Pod being evicted are very high if other Pods with requests and resource limits are defined.
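A LimitRange sketch (illustrative names and values, assumed rather than taken from the slides) that gives containers default requests/limits and caps the maximum:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits        # illustrative name
spec:
  limits:
    - type: Container
      defaultRequest:           # applied when a container sets no request
        cpu: 100m
        memory: 128Mi
      default:                  # applied when a container sets no limit
        cpu: 500m
        memory: 256Mi
      max:                      # hard ceiling per container
        cpu: "1"
        memory: 1Gi
```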

21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop


4 72
Kubernetes Pod Environment Variables

21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop


4 73
Kubernetes Adding Config to Pod
Config Maps allow you to
decouple configuration artifacts
from image content to keep
containerized applications
portable.

Source: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
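A minimal sketch of a ConfigMap consumed as environment variables (the names, keys and placeholder image are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # illustrative name
data:
  LOG_LEVEL: info
  DB_HOST: mysql-service
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: nginx:alpine       # placeholder image
      envFrom:
        - configMapRef:
            name: app-config    # every key becomes an environment variable
```

The same ConfigMap could instead be mounted as a volume, in which case each key appears as a file inside the container.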
21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop
4 74
Kubernetes Pod in Depth

A probe is an indicator of a container's health. Kubelet judges health by periodically performing a diagnostic action against the container:

• Liveness probe: Indicates whether a container is alive or not. If a container fails this probe, kubelet kills it and may restart it based on the restartPolicy of the pod.
• Readiness probe: Indicates whether a container is ready for incoming traffic. If a pod behind a service is not ready, its endpoint won't be created until the pod is ready.

3 kinds of action handlers can be configured to run against a container:

exec: Executes a defined command inside the container. Considered successful if the exit code is 0.
tcpSocket: Tests a given port via TCP; successful if the port is open.
httpGet: Performs an HTTP GET against the IP address of the target container. Headers in the request are customizable. The check is considered healthy if the status code satisfies 400 > CODE >= 200.

Additionally, there are five parameters that define a probe's behavior:

initialDelaySeconds: How long kubelet should wait before the first probe.
successThreshold: A container is considered healthy after this many consecutive probe successes.
failureThreshold: Same as the preceding, but for the negative side.
timeoutSeconds: The time limit for a single probe action.
periodSeconds: The interval between probe actions.

Source: https://github.com/meta-magic/kubernetes_workshop
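The handlers and parameters above can be combined in a Pod spec like this (an illustrative sketch; the pod name, paths, port and timings are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo              # illustrative name
spec:
  containers:
    - name: web
      image: nginx:alpine       # placeholder image
      livenessProbe:
        httpGet:                # healthy if 400 > CODE >= 200
          path: /healthz
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 5
        timeoutSeconds: 2
        failureThreshold: 3     # restart after 3 consecutive failures
      readinessProbe:
        tcpSocket:              # ready once the port accepts connections
          port: 80
        periodSeconds: 5
```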
4 75
Kubernetes
Pod Liveness Probe

• Liveness probe: Indicates


whether a container is alive
or not. If a container fails on
this probe, kubelet kills it
and may restart it based on
the restartPolicy of a pod.

Source: https://kubernetes.io/docs/tasks/configure-pod-
container/configure-liveness-readiness-probes/

21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop


4 76
Kubernetes Pod Secrets
Objects of type Secret are intended to hold sensitive information, such as passwords, OAuth tokens, and SSH keys. Putting this information in a Secret is safer and more flexible than putting it verbatim in a pod definition or in a Docker image.

Source: https://kubernetes.io/docs/concepts/configuration/secret/
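A Secret sketch and one way to consume it as an environment variable (illustrative names; the values are base64-encoded "admin" and "s3cr3t"):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials          # illustrative name
type: Opaque
data:
  username: YWRtaW4=            # base64 of "admin"
  password: czNjcjN0            # base64 of "s3cr3t"
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: nginx:alpine       # placeholder image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
```

Remember that base64 is encoding, not encryption; restrict access to Secrets with RBAC and consider encryption at rest for etcd.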
21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop
4 77
Kubernetes Pod Presets
A Pod Preset is an API resource for injecting
additional runtime requirements into a Pod
at creation time. You use label selectors to
specify the Pods to which a given Pod
Preset applies.

Using a Pod Preset allows pod template


authors to not have to explicitly provide all
information for every pod. This way,
authors of pod templates consuming a
specific service do not need to know all the
details about that service.
Source: https://kubernetes.io/docs/concepts/workloads/pods/podpreset/
21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop
4 78
Kubernetes Pod Disruption Budget
• A PDB limits the number of pods of a replicated application that are down simultaneously from voluntary disruptions.

• Cluster managers and hosting providers should use tools that respect Pod Disruption Budgets by calling the Eviction API instead of directly deleting pods.

$ kubectl drain NODE [options]
Source: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
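A PodDisruptionBudget sketch (illustrative name and label; `policy/v1beta1` was the API group current at the time of these slides, newer clusters use `policy/v1`):

```yaml
apiVersion: policy/v1beta1      # policy/v1 on newer clusters
kind: PodDisruptionBudget
metadata:
  name: ui-pdb                  # illustrative name
spec:
  minAvailable: 2               # alternatively, use maxUnavailable
  selector:
    matchLabels:
      app: shopping-ui          # assumed pod label
```

With this budget in place, `kubectl drain` will refuse to evict a pod if doing so would leave fewer than 2 matching pods running.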
21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop
4 79
Kubernetes Pod/Node Affinity / Anti-Affinity
• You can constrain a pod to only be able to run on particular nodes, or to prefer to run on particular nodes. There are several ways to do this, and they all use label selectors to make the selection.

• Assign the label to a Node
• Assign a Node Selector to a Pod

$ kubectl label nodes k8s.node1 disktype=ssd
Source: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
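Given the node label applied above, a Pod can be pinned to such nodes with a node selector (a minimal sketch; the pod name and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod                 # illustrative name
spec:
  nodeSelector:
    disktype: ssd               # matches the label applied with kubectl above
  containers:
    - name: app
      image: nginx:alpine       # placeholder image
```

For "prefer" rather than "require" semantics, `spec.affinity.nodeAffinity` with `preferredDuringSchedulingIgnoredDuringExecution` can be used instead of `nodeSelector`.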
21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop
4 80
Kubernetes Pod Configuration
Pod configuration

You use labels and annotations to attach metadata to your resources. To inject data into your
resources, you’d likely create ConfigMaps (for non-confidential data) or Secrets (for confidential data).

Taints and Tolerations - These provide a way for nodes to “attract” or “repel” your Pods. They are often
used when an application needs to be deployed onto specific hardware, such as GPUs for scientific
computing. Read more.

Pod Presets - Normally, to mount runtime requirements (such as environmental variables, ConfigMaps,
and Secrets) into a resource, you specify them in the resource’s configuration file. PodPresets allow you
to dynamically inject these requirements instead, when the resource is created. For instance, this
allows team A to mount any number of new Secrets into the resources created by teams B and C,
without requiring action from B and C.
Source: https://kubernetes.io/docs/user-journeys/users/application-developer/advanced/

21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop


4 81
Kubernetes Volumes for Stateful Pods
Persistent Volume Claims are mounted as Volumes inside the Pod.

1. Provision Storage (network storage; static, or dynamic via a Storage Class)
2. Request Storage (Persistent Volume Claim)
3. Use Storage (mount the claim in the Pod)
Kubernetes Volume

Persistent Volume
• A Persistent Volume is the physical storage available.
• A Storage Class is used to configure custom storage options (NFS, cloud storage) in the cluster. Storage Classes are the foundation of Dynamic Provisioning.
• A Persistent Volume Claim is used to mount the required storage into the Pod.

Access Mode
• ReadOnlyMany: Can be mounted as read-only by many nodes.
• ReadWriteOnce: Can be mounted as read-write by a single node.
• ReadWriteMany: Can be mounted as read-write by many nodes.

Volume Mode
• There are two modes: File System and raw Block storage. The default is File System.

Reclaim Policy
• Retain: The volume will need to be reclaimed manually.
• Delete: The associated storage asset, such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume, is deleted.
• Recycle: Delete content only (rm -rf /volume/*).
Source: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
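The claim-then-mount flow can be sketched like this (illustrative names; the storage class and image are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                # illustrative name
spec:
  accessModes:
    - ReadWriteOnce             # read-write by a single node
  storageClassName: standard    # assumed storage class
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: nginx:alpine       # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data     # the claim defined above
```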
4 83
Kubernetes Volume Types
Host Based Block Storage
Life cycle of a
o Amazon EBS
Persistent Volume
o EmptyDir
o HostPath o OpenStack Cinder
o Local o GCE Persistent Disk o Provisioning
o Azure Disk
Distributed File System
o vSphere Volume o Binding
o NFS
o Ceph
Others
o Using
o Gluster
o FlexVolume o iScsi o Releasing
o PortworxVolume o Flocker
o Amazon EFS o Git Repo o Reclaiming
o Azure File System o Quobyte

Source: https://github.com/meta-magic/kubernetes_workshop
4 84
Kubernetes Persistent Volume – hostPath

• The HostPath option makes the Volume available from the host machine.
• A Volume is created and linked with a storage provider. In the following example the storage provider is Minikube for the host path.
• Any PVC (Persistent Volume Claim) will be bound to the Persistent Volume that matches the storage class.
• If it doesn't match, a dynamic Persistent Volume will be created.

A Storage Class is mainly meant for dynamic provisioning of Persistent Volumes.

A Persistent Volume is not bound to any specific namespace.

Change the above path in your system.

Source: https://github.com/meta-magic/kubernetes_workshop
Persistent Volume – hostPath

A Pod accesses storage by issuing a Persistent Volume Claim. In the following example the Pod claims 2Gi of disk space from the host machine.

• Persistent Volume Claims, and Pods with Deployment properties, are bound to a specific namespace.
• The Developer is focused on the availability of storage space using a PVC and is not bothered about storage solutions or provisioning.
• The Ops team will focus on provisioning of Persistent Volumes and Storage Classes.

Source: https://github.com/meta-magic/kubernetes_workshop
Persistent Volume – hostPath

1. Create static Persistent Volumes and dynamic volumes (using a Storage Class).
2. A Persistent Volume Claim is created and bound to the static and dynamic volumes.
3. Pods refer to the PVC to mount volumes inside the Pod.

Run the YAMLs from the GitHub repository.

Source: https://github.com/meta-magic/kubernetes_workshop
Kubernetes Persistent Volume – AWS EBS

• Use a network file system or block storage for Pods to access data from multiple sources. AWS EBS is such a storage system.
• A Volume is created and linked with a storage provider. In the following example the storage provider is AWS for the EBS.
• Any PVC (Persistent Volume Claim) will be bound to the Persistent Volume that matches the storage class.

A Storage Class is mainly meant for dynamic provisioning of Persistent Volumes.

A Persistent Volume is not bound to any specific namespace.

$ aws ec2 create-volume --size 100   (the Volume ID is auto-generated)

Source: https://github.com/meta-magic/kubernetes_workshop
Persistent Volume – AWS EBS

A Pod accesses storage by issuing a Persistent Volume Claim. In the following example the Pod claims 2Gi of disk space from AWS EBS.

• Manual provisioning of AWS EBS supports ReadWriteMany; however, all the pods get scheduled onto a single node.
• For dynamic provisioning, use ReadWriteOnce.
• Google Compute Engine also doesn't support ReadWriteMany for dynamic provisioning.

Source: https://github.com/meta-magic/kubernetes_workshop
https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes

Kubernetes Advanced features


• Jobs
• Daemon Set
• Container Level features
• Kubernetes Commands – Quick Help
• Kubernetes Commands – Field Selectors
Kubernetes Jobs
A job creates one or more pods and ensures that a
specified number of them successfully terminate.
As pods successfully complete, the job tracks the
successful completions. When a specified number
of successful completions is reached, the job itself
is complete. Deleting a Job will clean up the pods it
created.

A simple case is to create one Job object in order to


reliably run one Pod to completion. The Job object
will start a new Pod if the first pod fails or is deleted
(for example due to a node hardware failure or a
node reboot).

A Job can also be used to run multiple pods in Command is wrapped for display purpose.
parallel.
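A Job sketch based on the classic pi-computation example from the Kubernetes docs (the name and values are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job                  # illustrative name
spec:
  completions: 1                # run one Pod to successful completion
  backoffLimit: 4               # retries before the Job is marked failed
  template:
    spec:
      restartPolicy: Never      # Jobs require Never or OnFailure
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```

Setting `parallelism` alongside `completions` runs multiple pods at once, which is the parallel case mentioned above.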
21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop Source: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
4 91

Kubernetes DaemonSet
A DaemonSet ensures that all (or some) Nodes run a copy of a
Pod. As nodes are added to the cluster, Pods are added to them.
As nodes are removed from the cluster, those Pods are garbage
collected. Deleting a DaemonSet will clean up the Pods it created.

Some typical uses of a DaemonSet are:


• running a cluster storage daemon, such as glusterd, ceph, on
each node.

• running a logs collection daemon on every node, such


as fluentd or logstash.

• running a node monitoring daemon on every node, such


as Prometheus Node Exporter, collectd, Dynatrace OneAgent,
Datadog agent, New Relic agent, Ganglia gmond or Instana
agent.
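The log-collection use case above can be sketched as a DaemonSet (illustrative names; the fluentd image tag and mount are assumptions):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector           # illustrative name
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: fluentd
          image: fluentd        # placeholder image tag
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log      # read the node's own logs
```

Because it is a DaemonSet, one such pod runs on every node, and new nodes automatically get a copy.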

21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop


4 92
Kubernetes Container Level Features

Container-level features

Sidecar container: Although your Pod should still have a single main
container, you can add a secondary container that acts as a helper
(see a logging example). Two containers within a single Pod can
communicate via a shared volume.

Init containers: Init containers run before any of a Pod’s app


containers (such as main and sidecar containers)

Source: https://kubernetes.io/docs/user-journeys/users/application-developer/advanced/
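An init-container sketch (the pod name, the `mysql-service` DNS name and the wait loop are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init           # illustrative name
spec:
  initContainers:
    - name: wait-for-db         # must exit successfully before app containers start
      image: busybox
      command: ["sh", "-c", "until nslookup mysql-service; do sleep 2; done"]
  containers:
    - name: app
      image: nginx:alpine       # placeholder main container
```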
21-10-2018
4 93
Kubernetes Commands – Quick Help
(Declarative Model)

Pods
$ kubectl create -f app-pod.yml
$ kubectl apply -f app-pod.yml
$ kubectl replace -f app-pod.yml
$ kubectl get pods
$ kubectl get pods -o json pod-name
$ kubectl get pods --show-labels
$ kubectl get pods --all-namespaces
$ kubectl describe pods pod-name
$ kubectl exec -it pod-name sh
$ kubectl exec pod-name ps aux

ReplicaSet
$ kubectl create -f app-rs.yml
$ kubectl apply -f app-rs.yml
$ kubectl replace -f app-rs.yml
$ kubectl get rs
$ kubectl get rs/app-rs
$ kubectl describe rs app-rs
$ kubectl delete rs/app-rs --cascade=false   (--cascade=true will delete all the pods)

Source: https://github.com/meta-magic/kubernetes_workshop
Kubernetes Commands – Quick Help
(Declarative Model)
Service
$ kubectl create -f app-service.yml
$ kubectl apply -f app-service.yml
$ kubectl replace -f app-service.yml
$ kubectl get svc
$ kubectl describe svc app-service
$ kubectl get ep app-service
$ kubectl describe ep app-service
$ kubectl delete svc app-service

Deployment
$ kubectl create -f app-deploy.yml
$ kubectl apply -f app-deploy.yml
$ kubectl replace -f app-deploy.yml
$ kubectl get deploy app-deploy
$ kubectl describe deploy app-deploy
$ kubectl rollout status deployment app-deploy
$ kubectl rollout history deployment app-deploy
$ kubectl rollout undo deployment app-deploy --to-revision=1

Source: https://github.com/meta-magic/kubernetes_workshop
Kubernetes Commands – Field Selectors
Field selectors let you select Kubernetes resources based on the value of one or
more resource fields. Here are some example field selector queries:
• metadata.name=my-service
• metadata.namespace!=default
• status.phase=Pending
$ kubectl get pods --field-selector status.phase=Running Get the list of pods where status.phase = Running

Supported Operators
You can use the =, ==, and != operators with field selectors (= and == mean the
same thing). This kubectl command, for example, selects all Kubernetes Services
that aren’t in the default namespace:
$ kubectl get services --field-selector metadata.namespace!=default

Source: https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/
21-10-2018
4 96
Kubernetes Commands – Field Selectors
Chained Selectors
As with label and other selectors, field selectors can be chained together as a
comma-separated list. This kubectl command selects all Pods for which
the status.phase does not equal Running and the spec.restartPolicy field
equals Always:
$ kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always

Multiple Resource Type


You can use field selectors across multiple resource types. This kubectl command
selects all Statefulsets and Services that are not in the default namespace:
$ kubectl get statefulsets,services --field-selector metadata.namespace!=default

Source: https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/
21-10-2018
97

Service Mesh: Istio

Gateway
Virtual Service
Destination Rule
5 98
Istio Components
Data Plane

Envoy: Deployed as a sidecar in the same K8s Pod.
• Dynamic Service Discovery
• Load Balancing
• TLS Termination
• HTTP/2 and gRPC Proxies
• Circuit Breakers
• Health Checks
• Staged Rollouts with %-based traffic split
• Fault Injection
• Rich Metrics

Control Plane

Mixer: Enforces access control and usage policies across the service mesh, and collects telemetry data from Envoy and other services. Also includes a flexible plugin model.

Pilot: Provides Service Discovery, Traffic Management (Routing) and Resiliency (Timeouts, Circuit Breakers, etc.).

Citadel: Provides strong service-to-service and end-user authentication with built-in identity and credential management. Can enforce policies based on service identity rather than network controls.

Galley: Provides configuration injection, processing and distribution.
Service Mesh – Sidecar Design Pattern

[Diagram: Customer and Order Microservices; each Pod runs the application (UI layer, business logic, web services) as process 1 and the Service Mesh sidecar as process 2.]

The application makes localhost calls that the sidecar intercepts, e.g. http://localhost/order/processOrder and http://localhost/payment/processPayment.

The sidecar provides CB (Circuit Breaker), LB (Load Balancer), SD (Service Discovery) and a Router for data-plane calls.

The Control Plane holds all the rules for routing and service discovery. The local Service Mesh sidecar downloads the routing rules from the Control Plane and keeps a local copy.

Source: https://github.com/meta-magic/kubernetes_workshop
Service Mesh – Traffic Control

[Diagram: an end user calls the API Gateway; traffic rules from the Service Mesh Control Plane steer requests between the Order v1.0 and Order v2.0 service clusters, with every service running a sidecar.]

Traffic control rules can be applied for:
• different Microservice versions
• re-routing requests to a debugging system to analyze problems in real time
• a smooth migration path

Source: https://github.com/meta-magic/kubernetes_workshop
Why Service Mesh?

• A multi-language / multi-technology-stack Microservices landscape requires a standard telemetry service.
• Adding SSL certificates across all the services.
• Abstracting horizontal concerns.

• Stakeholders: identify who is affected.
• Incentives: what a Service Mesh brings to the table.
• Concerns: their worries.
• Mitigate those concerns.

Source: https://github.com/meta-magic/kubernetes_workshop
Istio Sidecar Automatic Injection

21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop


5 103
Istio – Traffic Management

Gateway: Configures a load balancer for HTTP/TCP traffic, most commonly operating at the edge of the mesh to enable ingress traffic for an application.

Virtual Service: Defines the rules that control how requests for a service are routed within an Istio service mesh.
Routing Rules: Match (URI patterns, URI rewrites, headers), Routes, Fault, Weightages.

Destination Rule: Configures the set of policies to be applied to a request after Virtual Service routing has occurred.
Policies: Traffic policies, Load balancer.

Source: https://github.com/meta-magic/kubernetes_workshop
Istio Gateway
Configures a load balancer for HTTP/TCP traffic, most commonly operating at the edge of the mesh to enable ingress traffic for an application.
21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop
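A Gateway sketch (the name and host are assumptions; `networking.istio.io/v1alpha3` was the API version current for these slides):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: shopping-gateway        # illustrative name
spec:
  selector:
    istio: ingressgateway       # bind to Istio's default ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "shop.example.com"    # assumed external host
```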


5 105
Istio Virtual Service
Defines the rules that control how requests for a service are routed within an Istio service mesh.
21-10-2018 Source: https://github.com/meta-magic/kubernetes_workshop
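A VirtualService sketch with URI-based routing, in the spirit of the shopping-portal example later in the deck (the hosts, gateway and service names are assumptions):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: shopping-routes         # illustrative name
spec:
  hosts:
    - "shop.example.com"        # assumed host
  gateways:
    - shopping-gateway          # assumed Gateway name
  http:
    - match:
        - uri:
            prefix: /productms  # route product traffic
      route:
        - destination:
            host: product-service
    - match:
        - uri:
            prefix: /ui         # route UI traffic
      route:
        - destination:
            host: ui-service
```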


5 106
Istio Destination Rule
Configures the set of policies to be applied to a request after Virtual Service routing has occurred.
Source: https://github.com/meta-magic/kubernetes_workshop
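A DestinationRule sketch defining a load-balancing policy and the v1/v2 subsets used by canary routing (illustrative names and labels):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ui-destination          # illustrative name
spec:
  host: ui-service              # assumed Kubernetes service name
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
  subsets:
    - name: v1
      labels:
        version: v1             # pods labelled version=v1 (stable)
    - name: v2
      labels:
        version: v2             # pods labelled version=v2 (canary)
```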

21-10-2018
5 107
Shopping Portal – Docker / Kubernetes

[Diagram: traffic flows Firewall → Load Balancer → Ingress, which routes /ui, /productms and /productreview to the UI, Product and Review Services; each Service's Endpoints select Pods (Deployment / Replica / Pod) spread across Nodes; the Product Pods talk to a MySQL Pod, service calls are resolved via Kube DNS, and internal load balancers spread traffic across Pods.]

Source: https://github.com/meta-magic/kubernetes_workshop
Shopping Portal – Istio

[Diagram: same topology as the previous slide, now with the Istio Control Plane (Pilot, Mixer, Citadel); a Gateway fronts the Load Balancer, a Virtual Service routes /ui, /productms and /productreview, and Destination Rules apply policies per Service; every Pod carries the Istio sidecar (Envoy).]

Source: https://github.com/meta-magic/kubernetes_workshop
Shopping Portal – A/B Testing using Canary Deployment

[Diagram: the UI Service fronts Stable (v1) UI Pods and a Canary (v2) UI Pod; the Virtual Service routes User X to Canary v2 and all other users to Stable v1, while Destination Rules define the v1 and v2 subsets.]

Source: https://github.com/meta-magic/kubernetes_workshop
Shopping Portal - Traffic Shifting using Canary Deployment

[Diagram] Same topology, with a weighted routing rule on the UI Virtual Service:
10% of the traffic goes to the Canary (v2) and 90% to the Stable release (v1).
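The 90/10 split above is expressed with route weights; a sketch with illustrative
names (the weights must add up to 100):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ui-service
spec:
  hosts:
  - ui-service
  http:
  - route:
    - destination:
        host: ui-service
        subset: v1
      weight: 90             # Stable
    - destination:
        host: ui-service
        subset: v2
      weight: 10             # Canary
```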
Shopping Portal - Blue Green Deployment

[Diagram] Same topology; v2 of the UI is fully deployed alongside v1, but the
Virtual Service routes 100% of the traffic to the Stable release (v1). Cutting
over to v2 is a single routing change.
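The v1/v2 subsets used by all these routing rules are declared once in a
DestinationRule; a blue-green cutover is then just flipping the VirtualService
weight from 100% v1 to 100% v2. A sketch with illustrative names, assuming the
pods are labelled with a version label:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ui-service
spec:
  host: ui-service
  subsets:
  - name: v1                 # Blue / Stable pods, selected by label
    labels:
      version: v1
  - name: v2                 # Green / new release pods
    labels:
      version: v2
```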
Shopping Portal - Mirror Data

[Diagram] Same topology; the Virtual Service routes 100% of the live traffic to
the Stable release (v1) and mirrors a copy of each request to the Canary (v2),
whose responses are discarded.
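Mirroring is a fire-and-forget copy of live traffic; a sketch with illustrative
names:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ui-service
spec:
  hosts:
  - ui-service
  http:
  - route:
    - destination:
        host: ui-service
        subset: v1           # 100% of the live traffic
    mirror:
      host: ui-service
      subset: v2             # copy of each request; responses are discarded
```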
Shopping Portal - Fault Injection

[Diagram] Same topology; a fault-injection rule on the Virtual Service adds a
fixed delay of 2 seconds and aborts 10% of the requests, to test the resilience
of the calling services.
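The delay/abort rule can be sketched as a fault block on a route (v1alpha3 used
an integer percent field; newer Istio releases use percentage.value instead).
The service name is illustrative:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: review-service
spec:
  hosts:
  - review-service
  http:
  - fault:
      delay:
        percent: 100         # add the delay to every request
        fixedDelay: 2s
      abort:
        percent: 10          # fail 10% of the requests
        httpStatus: 500
    route:
    - destination:
        host: review-service
```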

Amazon AWS
• Virtual Private Cloud (VPC) / Subnets
• Internet Gateway
• Routes
Create VPC & Subnet

When you create a VPC, you define just one network CIDR block and the AWS
region - for example, CIDR 10.0.0.0/16 in us-east-1. You can use any network
address range, with a netmask between /16 and /28. Then create one or more
subnets within the VPC.

$ aws ec2 create-vpc --cidr-block 10.0.0.0/16
{
    "Vpc": {
        "VpcId": "vpc-7532a92g",
        "InstanceTenancy": "default",
        "Tags": [],
        "State": "pending",
        "DhcpOptionsId": "dopt-3d901958",
        "CidrBlock": "10.0.0.0/16"
    }
}

$ aws ec2 create-subnet --vpc-id vpc-7532a92g --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
{ "Subnet": { "VpcId": "vpc-7532a92g", "CidrBlock": "10.0.1.0/24", "State": "pending",
  "AvailabilityZone": "us-east-1a", "SubnetId": "subnet-f92x9g72", "AvailableIpAddressCount": 251 } }

$ aws ec2 create-subnet --vpc-id vpc-7532a92g --cidr-block 10.0.2.0/24 --availability-zone us-east-1b
{ "Subnet": { "VpcId": "vpc-7532a92g", "CidrBlock": "10.0.2.0/24", "State": "pending",
  "AvailabilityZone": "us-east-1b", "SubnetId": "subnet-16938e09", "AvailableIpAddressCount": 251 } }
Create Gateway and Attach it

You need an Internet Gateway for your VPC to connect to the internet. Create an
Internet Gateway, attach it to the VPC, and then set the routing rules for the
subnets to point to the gateway.

$ aws ec2 create-internet-gateway
{
    "InternetGateway": {
        "Tags": [],
        "InternetGatewayId": "igw-b837249v1",
        "Attachments": []
    }
}

Attach the Internet Gateway to the VPC:

$ aws ec2 attach-internet-gateway --vpc-id vpc-7532a92g --internet-gateway-id igw-b837249v1

Create a Route table for the VPC:

$ aws ec2 create-route-table --vpc-id vpc-7532a92g
Create Routes

Create a Route table for the VPC:

$ aws ec2 create-route-table --vpc-id vpc-7532a92g
{ "RouteTable": {
    "Associations": [],
    "RouteTableId": "rtb-ag89x582",
    "VpcId": "vpc-7532a92g",
    "PropagatingVgws": [],
    "Tags": [],
    "Routes": [
      { "GatewayId": "local",
        "DestinationCidrBlock": "10.0.0.0/16",
        "State": "active",
        "Origin": "CreateRouteTable" }
    ]
}}

Add a default route that sends all outbound traffic (0.0.0.0/0) to the Internet Gateway:

$ aws ec2 create-route --route-table-id rtb-ag89x582 --gateway-id igw-b837249v1 --destination-cidr-block 0.0.0.0/0
Best Practices

• Docker Best Practices
• Kubernetes Best Practices
Build Small Container Images (1)

• A simple Java web app with Ubuntu & Tomcat can have a size of 700 MB.
• Use an Alpine image as your base Linux OS.
• Alpine images are 10x smaller than base Ubuntu images.
• A smaller image size reduces the container's vulnerabilities.
• Ensure that only runtime environments are in your container. For example,
  your Alpine + Java + Tomcat image should contain only the JRE and NOT the JDK.
• Log the app output to the container's stdout and stderr.
Docker: To Root or Not to Root! (2)

• Create multiple layers of images: Alpine → JRE 8 → Tomcat 8 → My App 1.
• Create a user account.
• Add runtime software based on that user account.
• Run the app under the user account; this gives added security to the container.
• Add a security module such as SELinux or AppArmor to increase security.
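On the Kubernetes side, the same non-root practice can be enforced at deploy
time with a Pod securityContext. A minimal sketch, assuming the image was built
with a non-root user with UID 1000 (image and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  securityContext:
    runAsNonRoot: true     # refuse to start if the image would run as root
    runAsUser: 1000        # the user account baked into the image
  containers:
  - name: myapp
    image: myorg/myapp:1.0
```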
Docker: Container Security (3)

1. Secure your HOST OS! Containers run on the host kernel.
2. No runtime software downloads inside the container: declare the software
   requirements at build time itself.
3. Download Docker base images only from authentic sources.
4. Limit resource utilization using container orchestrators like Kubernetes.
5. Don't run anything in super-privileged mode.
Kubernetes: Naked Pods (4)

• Never use a naked Pod, that is, a Pod without a ReplicaSet or Deployment:
  naked Pods will never get rescheduled if they go down.
• Never access a Pod directly from another Pod. Always use a Service to access
  a Pod.
• Use labels to select the Pods, e.g. { app: myapp, tier: frontend, phase: test,
  deployment: v3 }.
• Never use the :latest image tag in a production scenario.
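These rules can be sketched together: a Deployment carrying the labels above
(with a pinned, non-:latest tag) and a Service used to reach the Pods. All
names are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:                      # labels from the slide, used for selection
        app: myapp
        tier: frontend
        phase: test
        deployment: v3
    spec:
      containers:
      - name: myapp
        image: myorg/myapp:1.0     # pinned tag, never :latest
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp                     # reach Pods through the Service, not directly
  ports:
  - port: 80
    targetPort: 8080
```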
Kubernetes: Namespace (5)

Service-Name.Namespace.svc.cluster.local

• Group your Services / Pods / Traffic Rules based on a specific Namespace.
• This helps you apply specific Network Policies for that Namespace, with an
  increase in security and performance.
• Handle specific resource allocations per Namespace.
• If you have more than a dozen microservices, it's time to bring in Namespaces.
• Every cluster starts with the default, kube-system and kube-public Namespaces.

$ kubectl config set-context $(kubectl config current-context) --namespace=your-ns

The above command switches your current context to the given namespace (your-ns).
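A Namespace with its own resource allocation can be sketched as a Namespace plus
a ResourceQuota (the quota values are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: your-ns
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: your-ns-quota
  namespace: your-ns
spec:
  hard:                    # caps on the Namespace's aggregate consumption
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```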
Kubernetes: Pod Health Check (6)

• Pod health checks are critical to increase the overall resiliency of the
  network. There are two kinds of probes:
  • Readiness
  • Liveness
• Ensure that all your Pods have Readiness and Liveness probes.
• Choose the protocol wisely (HTTP, Command or TCP).
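A sketch of both probes on a container (endpoint, port and image are
illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myorg/myapp:1.0
    ports:
    - containerPort: 8080
    readinessProbe:              # gate traffic until the app is ready
      httpGet:
        path: /health/ready      # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:               # restart the container if it hangs
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```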
Kubernetes: Resource Utilization (7)

• For the best Quality of Service, define resource requests and limits for your
  Pods.
• You can set default resource requests for a Dev Namespace to ensure that
  developers don't create Pods with very large or very small resource demands.
• A LimitRange can be set to ensure that containers are not created with too
  small or too large a resource footprint.
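A sketch of per-container requests/limits plus a namespace-wide LimitRange
(all values and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myorg/myapp:1.0
    resources:
      requests:                  # what the scheduler reserves
        cpu: 200m
        memory: 128Mi
      limits:                    # hard cap enforced at runtime
        cpu: 500m
        memory: 256Mi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev
spec:
  limits:
  - type: Container
    min:                         # reject containers asking for less
      cpu: 100m
      memory: 64Mi
    max:                         # reject containers asking for more
      cpu: "1"
      memory: 1Gi
    defaultRequest:              # applied when a container sets no request
      cpu: 200m
      memory: 128Mi
    default:                     # applied when a container sets no limit
      cpu: 500m
      memory: 256Mi
```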
Kubernetes: Pod Termination Lifecycle (8)

• Make sure the application handles the SIGTERM message.
• You can use a preStop hook.
• Set terminationGracePeriodSeconds (e.g. 60).
• Ensure that you clean up the connections and any other artefacts so the app
  (microservice) is ready for a clean shutdown.
• If the container is still running after the grace period, Kubernetes sends
  SIGKILL to shut down the Pod.
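The hook and grace period can be sketched as (the sleep is an illustrative
stand-in for whatever cleanup the app needs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  terminationGracePeriodSeconds: 60      # SIGKILL follows if still running after this
  containers:
  - name: myapp
    image: myorg/myapp:1.0
    lifecycle:
      preStop:                           # runs before SIGTERM is delivered
        exec:
          command: ["sh", "-c", "sleep 5"]   # e.g. let the LB drain connections
```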
Kubernetes: External Services (9)

• There are systems that can live outside the Kubernetes cluster, like
  • databases or
  • external services in the cloud.
• You can create an Endpoints object with a specific IP address and port,
  carrying the same name as the Service.
• You can create a Service with an ExternalName (URL), which does a CNAME
  redirection at the DNS level.
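Both options can be sketched as follows (the addresses and names are
illustrative assumptions):

```yaml
# Option 1: selector-less Service + manual Endpoints pointing at an external IP
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
  - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db        # must match the Service name
subsets:
- addresses:
  - ip: 10.1.2.3           # hypothetical external database address
  ports:
  - port: 3306
---
# Option 2: ExternalName Service (CNAME-style redirection)
apiVersion: v1
kind: Service
metadata:
  name: external-api
spec:
  type: ExternalName
  externalName: api.example.com
```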
Kubernetes: Upgrade Cluster (10)

• Make sure the Master is behind a Load Balancer.
• Upgrade the Master.
• Scale up with an extra Node, drain the Node to be upgraded, then upgrade it.
• The cluster keeps running even if the Master is down; only kubectl and other
  Master-specific functions are unavailable until the Master is back up.
Araf Karsh Hamid : Co-Founder / CTO

araf.karsh@metamagic.in
USA: +1 (973) 969-2921
India: +91.999.545.8627
Skype / LinkedIn / Twitter / Slideshare: arafkarsh
http://www.slideshare.net/arafkarsh
https://www.linkedin.com/in/arafkarsh/
http://www.arafkarsh.com/

Source: https://github.com/meta-magic/kubernetes_workshop
