
Deploying Kubernetes with Cisco ACI
Camillo Rossi – TME DCBU
BRKACI-2505
Session Objectives
• At the end of the session, the participants should be able to:
• Have a general understanding of containers
• Have a general understanding of Kubernetes
• Understand how the ACI and Kubernetes integration is deployed
• Initial assumption:
• The audience already has a good knowledge of ACI main concepts
(Tenant, BD, EPG, L2Out, L3Out, etc.)

#CLUS BRKACI-2505 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 3
Agenda
• ACI-Kubernetes value proposition
• Introduction to Containers
• Container Management and Orchestration
• ACI and Kubernetes Solution Overview
• Demos
• Q&A

Cisco Webex Teams
Questions?
Use Cisco Webex Teams to chat
with the speaker after the session

How
1 Find this session in the Cisco Live Mobile App
2 Click “Join the Discussion”
3 Install Webex Teams or go directly to the team space
4 Enter messages/questions in the team space

Webex Teams will be moderated by the speaker until June 16, 2019.
cs.co/ciscolivebot#BRKACI-2505

ACI Kubernetes Integration – Value proposition
• Allow containers direct access to the ACI policy model, so that they
can participate as first-class citizens within an ACI fabric
• Allow seamless integration of containers, VMs, and physical
devices on an ACI fabric
• Support native policy semantics, so that a container application that
is specified using Kubernetes NetworkPolicy will work correctly out
of the box
• i.e., the same config works on Google Cloud, AWS, and ACI
• Leverage fabric resources and OpFlex to help accelerate
Kubernetes service load balancing
ACI Kubernetes Integration – Value proposition
• ACI admin can (optionally) define EPGs and contracts that are used
to secure communication from/to/within the Kubernetes cluster
• EPG is selected based on annotation, which can also be used to
dynamically define new EPGs

Agenda
• ACI-Kubernetes value proposition
• Introduction to Containers
• Container Management and Orchestration
• ACI and Kubernetes Solution Overview
• Demos
• Q&A

Linux fundamentals: Understanding Containers' Origins (For your reference)

Lightweight Process Virtualisation is not new (For your reference)

• Create multiple views of the root filesystem to isolate applications


and processes
• Lightweight Process Virtualisation is not a new concept:
• Solaris Zones
• BSD jails
• Linux chroot
• AIX WPARs (Workload Partitions)

What is new? (For your reference)

• Linux Kernel 3.8 (February 2013) added support for namespaces


and cgroups
• Namespaces are like chroot for processes, but also applied to
network, UTS (Unix Timesharing), mount, IPC and users (UIDs).
• Example: Network namespaces enable the creation of multiple, isolated
routing tables that operate independently. Multiple hostnames could also
be used:
• # ip netns add myns1 → creates a new namespace “myns1”

What is new? (For your reference)
Namespaces architecture
• UTS namespaces allow a single system to appear to have
different host and domain names to different processes
• The PID namespace provides processes with an independent
set of process IDs (PIDs) from other namespaces
• Mount namespaces control mount points
What is new? (For your reference)
Namespaces architecture (cont.)
• IPC namespaces isolate processes from SysV-style
inter-process communication
• Network namespaces virtualise the network stack
• User namespaces provide both privilege isolation and
user-identification segregation across multiple sets of processes
What is new?
• Cgroups provide resource management capabilities
• Processes can be grouped into user-defined groups of tasks for
optimised system resource usage
• Cgroups move resource allocation from the process level to the
application level by grouping and labeling processes into
hierarchies
• Resource allocation includes CPU time, block IO, RAM and network
bandwidth

What is new?
• Cgroups architecture: each cgroup (Cgroup1, Cgroup2, …) groups
processes and constrains their CPU, network, memory and
storage I/O usage.
What are containers?
What is a container?
• A container is a binary executable, packaged with dependencies
and intended for execution in a private namespace with optional
resource constraints.
• This provides the containers multiple isolated operating system
environments with their own file system, network, process and
block I/O space on the same host

Compute Virtualisation and Containers
• VMs: each VM runs an App with its Bins/Libs on a full Guest OS,
on top of a Hypervisor, the Host OS and the server hardware.
• Containers: each container runs an App with its Bins/Libs only,
on top of a Container Engine (Linux), the Host OS and the server
hardware.
Compute Virtualisation and Containers
Similarities

• They provide a way to abstract resources


• They define logical boundaries to the resources they consume
• They enable multiple OS instances to run on the same host
• They share the resources of the host

Compute Virtualisation and Containers
Differences

• Containers can only run the same OS as the host


• Containers share the same kernel as the host
• Containers are faster to provision and boot
• Containers have lower overhead as there is no need for the
hypervisor layer

Containers Current Challenges
• Container image management
• Orchestration of containers across multiple hosts
• Lack of standards
• Integration with virtualisation and cloud tools
• Networking management ← addressed by ACI
Why Containers?
Application Architectural Evolution
• Service: autonomous, loosely coupled, independently scalable
• Microservice: single purpose, stateless, ephemeral, automated
• Function f(): single action, event sourced
An effective platform for micro-services
• Containers are ideal candidates to run micro-services:
• Micro-services define stateless, loosely coupled application components
communicating over APIs, running in different runtime environments.
• Containers meet new application requirements as they provide:
• Density
• Speed
• Portability
• Low overhead management

Best Practices when using containers
• Containers should be ephemeral
• Containers are immutable
• Don't store data within containers
• Don't build large images
• Don't run more than one process per container
• Don't rely on IP addresses
Portability is key
• As containers are portable, they are very relevant in the following
areas:
• Continuous Integration
• Hybrid cloud strategy
• Scale out applications
• Web development

Container runtimes
• A container runtime enables users to make effective use of
containerisation mechanisms by providing APIs and tooling that
abstract the low-level technical details
• LXC – open source software (OSS)
• Docker – OSS and commercial
• rkt – part of CoreOS, OSS and commercial
• VMware Integrated Containers (aka Project Bonneville) – proprietary
• runC – OSS
• Garden – part of Pivotal Cloud Foundry, OSS and commercial
Docker
Docker provides an integrated
technology suite that enables
development and IT operations
teams to build, ship, and run
distributed applications anywhere.

Docker containers wrap a piece
of software in a complete
filesystem that contains
everything needed to run: code,
runtime, system tools, system
libraries – anything that can be
installed on a server.
This guarantees that the software
will always run the same,
regardless of its environment.
For your
reference

In version 0.9, Docker switched


from LXC to their own execution
driver, called libcontainer.

Docker consists of two main components:

• Docker Engine – the actual application running on the host; it
builds and runs containers.
• Docker Hub – SaaS component for managing and sharing
containers.
Dockerfile and Registry
• A Dockerfile is simply a text file containing instructions on how to
build a Docker image
• It can add components on top of an existing image
• Images are available online on the Docker Hub repository
• Local, private registries can be created.
• A registry is an instance of the registry container image

More about Docker Images…
Docker Images
• A Docker image is made up of filesystems layered over each other.

Docker Images
• The storage driver is responsible for presenting these layers as a
single, unified file system.

Docker Images
• When you start a container, Docker creates an empty, read-write
layer on top of the stack – all changes are made in this layer.

Docker Images
• Docker uses “copy-on-write” container layers.
• If a file needs to be modified, it is copied into the read-write layer
first.

Docker Images
• This means that multiple containers can share a single copy of the
image.

Docker Networking
Option 1 - None

• Doesn’t create any network interface for the container

Option 2 - Bridge Mode
• Default mode where Docker attaches containers to Docker0 bridge
• Containers in the same host can talk to each other
• Containers on different host can’t talk to each other (or anything
else) easily

Option 2 - Bridge Mode
• Requirement to use iptables to NAT the container IPs
• Topology: Host-1 and Host-2 each attach their containers
(Eth0 172.17.0.12 and 172.17.0.13 – the same container IPs on
both hosts is not a typo) to a local docker0 bridge (172.17.42.1)
via veth pairs; traffic leaves through iptables NAT on each host's
eth0 toward the Kubernetes BD/VRF.
Option 3 - Host Mode (For your reference)
• Connect containers to the host network stack
• All the network interfaces defined on the host will be available to
the container (every container will have the same IP address as the
host)
• Topology: Host-1 with host interface Eth0 192.168.0.2; Container 1
and Container 2 both present Eth0 192.168.0.2.
Option 4 - Mapped Container Mode (For your reference)
• A container is mapped to another container's network stack
• Filesystem, processes and other resources are kept separate
• They share network resources (IP, interfaces)
• Topology: on Host-1, Container 1 and Container 2 share Eth0
172.17.0.12 behind a single veth on the docker0 bridge
(172.17.42.1).
All these Docker networking options are complex
• iptables rules must be created manually to allow/NAT traffic to
container ports
• Containers on different hosts can't communicate with each other
even if they are in the same L2 domain
• Port mappings must be managed manually
• Prone to errors
Docker network driver plugins
• Network plugins can be used to extend Docker networking
support to a wide range of networking technologies, such as
VXLAN, IPVLAN, MACVLAN or something completely different.

A tale of two standards… (For your reference)

• Container Network Model (CNM)
• Proposed by Docker
• Plugin-based
• Supports only Docker
• Containers can join 1 or more networks
• Supports namespace isolation
• Integrates with IPAM
• Complex

• Container Network Interface (CNI)
• Proposed by CoreOS
• Plugin-based
• Multiple runtimes (Docker, LXC etc.)
• Containers can join 1 or more networks
• Supports namespace isolation
• Integrates with IPAM
• Simple
Kubernetes chose… (For your reference)

• CNI
• Would you like to know more?
• http://blog.kubernetes.io/2016/01/why-Kubernetes-doesnt-use-
libnetwork.html

Container Orchestrators
In a multi-host environment, containers need to:
• Have network reachability
• Be fault-tolerant
• Be easily scalable
• Use resources optimally
• Discover other containers/applications automatically
• Communicate with each other
• Be updated/rolled back without any downtime
• Expose services in an easy and reliable way
Container Orchestration basic features
• Bring multiple hosts together and make them part of a cluster
• Schedule containers to run on different hosts
• Help containers running on one host reach out to containers
running on other hosts in the cluster
• Bind containers and storage
• Bind containers of similar type to a higher-level construct, like
services, so we don't have to deal with individual containers
• Keep resource usage in check, and optimize it when necessary
• Allow secure access to applications running inside containers.
Containers Orchestrators
• Docker Swarm is a Container Orchestrator provided by Docker, Inc.
It is part of Docker Engine.
• Kubernetes started by Google, now part of the Cloud Native
Computing Foundation project.
• Mesos Marathon is one of the frameworks to run containers at
scale on Apache Mesos.
• Amazon EC2 Container Service (ECS) is a hosted service provided
by AWS to run Docker containers at scale on its infrastructure.
• Hashicorp Nomad is the Container Orchestrator provided
by HashiCorp.
Kubernetes
• Kubernetes is an open source Container Orchestration system for
automating deployment, scaling and management of containerised
applications.
• It was inspired by the Google Borg System and with its v1.0 release
in July 2015, Google donated it to the Cloud Native Computing
Foundation (CNCF).
• Generally, Kubernetes has new releases every three months. The
current stable version is 1.11 (as of Jan 2019).

Kubernetes and Docker
• Kubernetes uses Docker to execute/run the containers
• Kubernetes adds, on top of Docker, all the intelligence and features
of an orchestrator

Kubernetes Features
• Automatic binpacking
Kubernetes automatically schedules the containers based on
resource usage and constraints, without sacrificing availability.
• Self-healing
Kubernetes automatically replaces and reschedules the containers
from failed nodes. It also kills and restarts containers which do not
respond to health checks, based on existing rules/policy.
• Horizontal scaling
Kubernetes can automatically scale applications based on resource
usage like CPU and memory. It also supports dynamic scaling
based on custom metrics.
Kubernetes Features (cont.)
• Service discovery and Load balancing
Kubernetes groups sets of containers and refers to them via a DNS
name. This DNS name is also called a Kubernetes service.
Kubernetes can discover these services automatically, and load-
balance requests between containers of a given service.
• Automated rollouts and rollbacks
Kubernetes can roll out and roll back new versions/configurations of
an application, without introducing any downtime.

Kubernetes Features (cont.)
• Secrets and configuration management
Kubernetes can manage secrets and configuration details for an
application without re-building the respective images. With secrets,
we can share confidential information to our application without
exposing it to the stack configuration, like on GitHub.
• Storage orchestration
With Kubernetes and its plugins, we can automatically mount local
and external storage solutions to the containers in a seamless
manner, based on Software Defined Storage (SDS).
• Batch execution
Besides long running jobs, Kubernetes also supports batch execution.
Kubernetes Architecture
• At a very high level, Kubernetes has the following main
components:
• One or more Master Nodes
• One or more Worker Nodes
• Distributed key-value store, like etcd.

Kubernetes – etcd (For your reference)

• Kubernetes uses etcd to store the cluster state. etcd is a


distributed key-value store based on the Raft Consensus
Algorithm. Raft allows a collection of machines to work as
a coherent group that can survive the failures of some of
its members. At any given time, one of the nodes in the
group will be the Master, and the rest of them will be the
Followers. Any of the nodes can become the Master.
• etcd is written in the Go programming language. In
Kubernetes, besides storing the cluster state, etcd is also
used to store configuration details such as subnets,
ConfigMaps, Secrets, etc.

Kubernetes Components – Master Node
• The Master Node is responsible for managing the Kubernetes
cluster. Master node access methods are CLI, GUI or APIs.
• For fault tolerance, there can be more than one Master Node.
• To manage the cluster state, Kubernetes uses etcd, and all
Master Nodes connect to it. etcd is a distributed key-value
store. The key-value store can be part of the Master Node. It
can also be configured externally, in which case, the Master
Nodes connect to it.

Kubernetes Components – Worker Node
• A Worker Node is a machine (VM, physical
server, etc.) which runs the containers using
pods and is controlled by the Master Node.
• pods are scheduled on the Worker Nodes

Kubernetes - pod
• A pod is the scheduling unit in Kubernetes. It is a
logical collection of one or more containers which are
always scheduled together.
• The set of containers composed together in a pod
share an IP.
[root@k8s-01-p1 ~]# kubectl get pod --namespace=kube-system
NAME READY STATUS RESTARTS AGE
aci-containers-controller-1201600828-qsw5g 1/1 Running 1 69d
aci-containers-host-lt9kl 3/3 Running 0 72d
aci-containers-host-xnwkr 3/3 Running 0 58d
aci-containers-openvswitch-0rjbw 1/1 Running 0 58d
aci-containers-openvswitch-7j1h5 1/1 Running 0 72d
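A pod like the ones listed above can also be declared as a short YAML manifest. The following is a minimal sketch – the name, label, and image are illustrative, not from this cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: front-end            # illustrative name
  labels:
    app: guestbook
spec:
  containers:                # every container listed here shares the pod's IP
  - name: web
    image: nginx:1.15        # illustrative image
    ports:
    - containerPort: 80
```

Applied with `kubectl apply -f pod.yaml`; all containers under spec.containers are always scheduled together on the same node.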

Kubernetes – Deployment
• Deployments are a collection of pods providing the same service
• You describe the desired state in a Deployment object, and the
Deployment controller will change the actual state to the desired
state at a controlled rate for you
• For example, you can create a deployment that declares you need to
have 2 copies of your front-end pod.
[root@k8s-01-p1 ~]# kubectl get deployment --namespace=kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
aci-containers-controller 1 1 1 1 72d
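The two-copies front-end example above can be sketched as a Deployment manifest; the names and image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-end
spec:
  replicas: 2                # desired state: 2 copies of the pod
  selector:
    matchLabels:
      app: front-end
  template:                  # pod template the controller stamps out
    metadata:
      labels:
        app: front-end
    spec:
      containers:
      - name: front-end
        image: nginx:1.15    # illustrative image
```

The Deployment controller continuously reconciles the actual number of running pods toward `replicas: 2`.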

Kubernetes – Services
• A service tells the rest of the Kubernetes environment (including other
pods and Deployments) what services your application provides.
• While pods come and go, the service IP addresses and ports remain the
same.
• Kubernetes automatically load-balances traffic across the replicas in the
deployment that you expose through a Service
• Other applications can find your service through Kubernetes service
discovery.
• Every time a service is created, a DNS entry is added to kube-dns
[root@k8s-01-p1 ~]# kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 11.96.0.10 <none> 53/UDP,53/TCP 72d
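A minimal Service manifest, assuming the pods to expose carry an illustrative label `app: front-end`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end
spec:
  selector:
    app: front-end   # pods matching this label become the endpoints
  ports:
  - port: 80         # stable service port (backed by the cluster IP)
    targetPort: 80   # container port on the pods
```

The service's cluster IP and DNS name stay the same while the pods behind it come and go.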

Kubernetes – External Services
• If there are external IPs that route to one or more cluster nodes,
Kubernetes services can be exposed on those external IPs.
• Traffic that ingresses into the cluster with the external IP (as
destination IP), on the service port, will be routed to one of the
service endpoints.
• External IPs are not managed by Kubernetes and are the
responsibility of the cluster administrator.
[root@k8s-01-p1 ~]# kubectl get svc front-end --namespace=guest-book
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
front-end 11.96.0.33 11.3.0.2 80:30002/TCP 3m
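Exposing a service on an external IP is a matter of adding `externalIPs` to the spec. A sketch mirroring the 11.3.0.2 address in the output above (the selector label is an illustrative assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end
  namespace: guest-book
spec:
  selector:
    app: front-end       # illustrative pod label
  ports:
  - port: 80
  externalIPs:
  - 11.3.0.2             # not managed by Kubernetes; routing it to the
                         # cluster nodes is the administrator's job
```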

Kubernetes – Ingress
• An Ingress is a collection of rules that allow inbound connections to
reach the cluster services.
• It can be configured to give services externally-reachable URLs,
load balance traffic, terminate SSL, offer name based virtual
hosting, and more
• Think of NGINX
[root@k8s-01-p1 ~]# kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
test-ingress * 80 7s
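A minimal manifest matching the `test-ingress` output above; the backend service name and port are illustrative assumptions:

```yaml
apiVersion: extensions/v1beta1   # Ingress API group in this Kubernetes release
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:                       # default backend: all inbound traffic goes here
    serviceName: front-end       # illustrative service
    servicePort: 80
```

Rules with hosts and paths can be added under `spec.rules` for name-based virtual hosting.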

Kubernetes - Labels
• Kubernetes uses labels as “nametags” to identify things.
• Can be used to indicate roles, stability, or other important
attributes.
• You can query anything in Kubernetes via a label.
• e.g. return all the pods that are running a “PreProduction” workload

[root@k8s-01-p1 ~]# kubectl get pod --namespace=kube-system -l component=kube-apiserver


NAME READY STATUS RESTARTS AGE
kube-apiserver-k8s-01-p1 1/1 Running 0 72d

Kubernetes - Annotations
• Similar to labels, but NOT used to identify and select objects
• Used by ACI – yes, soon we will be speaking about ACI and
Kubernetes 
[root@k8s-01-p1 ~]# kubectl describe node k8s-01-p1 | more
Name: k8s-01-p1
Role:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=k8s-01-p1
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl=0
opflex.cisco.com/pod-network-ranges={"V4":[{"start":"11.2.0.130","end":"11.2.1.1"}]}
opflex.cisco.com/service-endpoint={"mac":"66:85:9a:e9:ef:2f","ipv4":"11.5.0.3"}
volumes.kubernetes.io/controller-managed-attach-detach=true
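As a preview of the ACI integration: the ACI CNI plugin reads an `opflex.cisco.com/endpoint-group` annotation to place a workload into a specific EPG. A hedged sketch – the tenant, app-profile, and EPG names are illustrative, not from this cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-end
  annotations:
    # target EPG expressed as tenant / app profile / EPG name (illustrative values)
    opflex.cisco.com/endpoint-group: '{"tenant":"AcmeTenant","app-profile":"kubernetes","name":"frontend-epg"}'
spec:
  replicas: 2
  selector:
    matchLabels:
      app: front-end
  template:
    metadata:
      labels:
        app: front-end
    spec:
      containers:
      - name: front-end
        image: nginx:1.15   # illustrative image
```

The same mapping can be applied imperatively with `kubectl annotate` on an existing deployment.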

Kubernetes – Namespace
• Groups everything together:
• Pod
• Deployment
• Volumes
• Services
• Etc…
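Creating a namespace is a one-object manifest (the name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: guest-book
```

Objects are then created inside it, e.g. `kubectl apply -f app.yaml --namespace=guest-book`.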

All Together: A K8S Cluster
• A Deployment groups pods (pod1, pod2, … pod[n]), each running
application containers, scheduled across Node1 … Node[N].
• A Service (e.g. 1.1.1.1:80) load-balances across those pods.
• Everything is grouped in a Namespace; a node can be part of
several Namespaces.
ACI and Kubernetes Solution Overview
Application Centric Infrastructure
Any Application – Any hypervisor
• Built on and supporting open systems and standards
• Common pervasive gateway and policy-based routing provide
optimal network connectivity
• Policy consistency lets containers run reliably and securely
• Ease of deploying, scaling and managing
Cisco ACI and Container Integration
ACI and Containers
• Unified networking: containers, VMs, and bare metal
• Micro-services load balancing integrated in the fabric for
HA / performance
• Secure multi-tenancy and seamless integration of Kubernetes
network policies and ACI policies
• Visibility: live statistics in APIC per container, and health metrics
• Each Kubernetes node runs OpFlex and OVS.
Deploying Kubernetes Clusters to ACI Fabrics
• Kubernetes clusters are deployed to an ACI tenant (existing or new)
• The ACI CNI installer will create Bridge Domains and EPGs for
Kubernetes node and pod subnets on a given VRF (in the tenant or
from common)
• The ACI CNI installer will also create the VMM Domain and other
relevant objects
• ACI supports multiple clusters per fabric – e.g. K8s Cluster-01 and
K8s Cluster-02 (each with APP/DB EPGs) sharing one VRF in
Tenant: AcmeTenant, with external (EXT) connectivity
• The ACI CNI plugin is compatible with Multi-Pod
ACI Network Plugin for Kubernetes
Native Security Policy Support
• Developer: (1) builds containers, (2) deploys/scales clusters,
(3) annotates policy to select an EPG
• Infosec: defines the container network policy
• Network administrator: (1) fabric bring-up – defines BDs, contexts
and application profiles, (2) gets VLAN pools allocated for each
EPG, (3) gains full infrastructure visibility and telemetry
• Policy is enforced both in the infrastructure (ACI fabric) and at
host level via OpFlex/OVS – e.g. WEB/APP/DB EPGs spanning
Server 1 and Server 2 on the ACI fabric
ACI VMM Domain for Kubernetes
Technical Description
• Kubernetes network policies are supported using the standard
upstream format but enforced through OpFlex / OVS using APIC
Host Protection Profiles
• Kubernetes app configurations can be moved without
modification to/from ACI and non-ACI environments
• Embedded fabric and virtual switch load balancing
• PBR in the fabric for external service load balancing
• OVS used for internal service load balancing
• VMM Domain for Kubernetes
• Stats per namespace, deployment, service, pod
• Physical-to-container correlation
(OpFlex and OVS run on each node.)
ACI CNI Plugin Components
• aci-containers-controller
• Handles IPAM
• Manages endpoint state
• Policy mapping (annotations)
• Controls load balancing
• Pushes configurations into the APIC

ACI CNI Plugin Components
• aci-containers-host is a DaemonSet composed of 3 containers:
• mcast-daemon:
• Handles Broadcast, unknown unicast and multicast replication
• aci-containers-host:
• Endpoint metadata
• Pod IP Address management
• Container Interface Configuration
• opflex-agent:
• Support for Stateful Security Groups
• Manage configuration of OVS
• Renders policy into OpenFlow rules to program OVS
• Handles load-balanced services (connection tracking, NAT, etc.)
ACI CNI Plugin Components
• aci-containers-openvswitch
• Bridge traffic from containers to physical interfaces
• Enforce policies

ACI and Kubernetes Security Model
Support for Network Policy in ACI
• Specification of how selections of pods are allowed to
communicate with each other and other network endpoints
• Network namespace isolation using defined labels
(e.g. namespace-a, namespace-b)
• Directional: allowed ingress pod-to-pod traffic
• Filters traffic from pods in other projects
• Can specify protocol and ports (e.g. tcp/80)

Policy applied to namespace: namespace-a

kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-red-to-blue-same-ns
spec:
  podSelector:
    matchLabels:
      type: blue
  ingress:
  - from:
    - podSelector:
        matchLabels:
          type: red
Mapping Network Policy and EPGs
• Cluster Isolation (default behavior): a single EPG for the entire
cluster; no need for any internal contracts.
• Namespace Isolation: each namespace is mapped to its own EPG;
contracts govern inter-namespace traffic.
• Deployment Isolation: each deployment is mapped to an EPG;
contracts tightly control service traffic.
Key: the isolation scope maps to an EPG; a Kubernetes
NetworkPolicy maps to an ACI Contract.
Dual level Policy Enforcement by ACI

• Both Kubernetes Network Policy and ACI contracts are enforced in the Linux kernel of every server node that containers run on.
• Containers are mapped to EPGs, and contracts between EPGs are also enforced on all switches in the fabric where applicable.
• Both policy mechanisms can be used in conjunction.

Native API – default deny all traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
ACI Basic
Configuration
Connecting Bare Metal Nodes to the Fabric

• Virtual Port Channels enable simple and fast link redundancy for Kubernetes bare metal nodes
• Can use standards-based LACP between K8s nodes and the leaf pair for optimal load balancing and link-failure convergence
• vPC Policy Group per K8s node server
• AEP per K8s cluster
• Enable the infraVLAN on the AEP

(Diagram: Kubernetes Node 01 bonds enp8s0/enp9s0 into bond0, dual-homed via vPC to eth1/10 on ACI Leaf1 and Leaf2; k8s-node-vPC-PolicyGroup is associated with AEP-k8s-cluster-01. Make sure the infraVLAN is enabled for the AEP.)
Connecting VM Nodes to the Fabric

• As of ACI 3.1 it is also supported to run the ACI CNI plugin for clusters running on vSphere VMs
• The VMs should be running on a VMware VMM Domain
• Prior to installation, ensure the infraVLAN is activated on the AEP used for the ESXi hosts
• The VMs will be connected to a PortGroup created by the ACI CNI installer tool

(Diagram: Kubernetes Node 01 VM with ens160 attached to the ESXi-node VDS; the ESXi host is dual-homed via vPC to eth1/10 on ACI Leaf1 and Leaf2; esxi-node-vPC-PolicyGroup is associated with AEP-VDS-ESX-CLUSTER. Make sure the infraVLAN is enabled for the AEP.)
Kubernetes Nodes will require the following
minimum interfaces
• InfraVLAN – sub-interface over which the OpFlex channel is built
• Node IP – sub-interface used for the Kubernetes API host IP address
• (Optional) OOB Management – sub-interface or physical interface optionally used for OOB access

IMPORTANT: the default route must be on the Node IP interface.
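As a sketch of this layout on a Debian/Ubuntu-style node — the VLAN IDs and addresses are the sample values from this deck's diagrams and are placeholders for your environment — /etc/network/interfaces could look like:

```
# Node VLAN (kubeapi_vlan): node IP and the default route live here
auto bond0.4001
iface bond0.4001 inet static
    address 11.1.0.100
    netmask 255.255.0.0
    gateway 11.1.0.1      # kube-node-bd subnet IP

# ACI infraVLAN: OpFlex channel, addressed by the fabric via DHCP
auto bond0.4093
iface bond0.4093 inet dhcp
```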

acc-provision
• ACI Container Controller Provision:
• Takes a YAML file containing the parameters of your configuration
• Generates and pushes most of the ACI config
• Generates Kubernetes ACI CNI containers configuration
acc-provision --flavor=kubernetes-1.13 -a -u admin -p pass -c config.yml -o cni_conf.yml

• --flavor: selects what we are deploying (e.g. Kubernetes 1.6, 1.7, 1.13 or OpenShift 3.6)
• -u / -p: APIC user and password
• -c: configuration file
• -o: output file for the ACI CNI config

acc-provision – configuration file (1)
aci_config:
  system_id: KubeSpray          # Tenant Name and Controller Domain Name
  apic_hosts:                   # List of APIC hosts to connect for APIC API
  - 10.67.185.102
  vmm_domain:                   # Kubernetes VMM domain configuration
    encap_type: vxlan           # Encap mode: vxlan or vlan
    mcast_range:                # mcast range for BUM replication
      start: 225.22.1.1
      end: 225.22.255.255
    mcast_fabric: 225.1.2.4
    nested_inside:              # (OPTIONAL) If running k8s nodes as VMs specify the VMM Type and Name
      type: vmware              # Only vmware for now, port groups created automatically with system_id name
      name: ACI

  # The following resources must already exist on the APIC,
  # they are used, but not created by the provisioning tool.
  aep: ACI_AttEntityP           # The AEP for ports/VPCs used by this cluster
  vrf:                          # The VRF can be placed in the same Tenant or in common
    name: vrf1
    tenant: KubeSpray           # This can be the system-id or common
  l3out:
    name: l3out                 # Used to provision external IPs
    external_networks:
    - default_extepg            # Default Ext EPG, used for PBR redirection

acc-provision – configuration file (2)
#
# Networks used by Kubernetes
#
net_config:
  node_subnet: 10.32.0.1/16     # Subnet to use for nodes
  pod_subnet: 10.33.0.1/16      # Subnet to use for Kubernetes Pods
  extern_dynamic: 10.34.0.1/24  # Subnet to use for dynamic external IPs
  extern_static: 10.35.0.1/24   # Subnet to use for static external IPs
  node_svc_subnet: 10.36.0.1/24 # Subnet to use for service graph
  kubeapi_vlan: 4011            # The VLAN used for node-to-node API communications
  service_vlan: 4013            # The VLAN used by LoadBalancer services
  infra_vlan: 3456              # The ACI infra VLAN used to establish the OpFlex tunnel with the leaf

#
# Configuration for container registry
# Update if a custom container registry has been setup
#
registry:
  image_prefix: noiro           # DO NOT CHANGE

acc-provision creates tenant EPGs for nodes and Pods

Within the selected tenant the provisioning tool creates a 'Kubernetes' Application Profile with three EPGs:
• 'kube-nodes': for node interfaces, mapped to the PhysDom
• 'kube-system': for system PODs (e.g. kube-dns), mapped to the VMMDom
• 'kube-default': default EPG for all containers in any namespace, mapped to the VMMDom

acc-provision creates tenant BDs for nodes and
Pods
• kube-nodes-bd:
• Only used for kube-node EPG
• Maps to node_subnet

• kube-pod-bd:
• Any pod will be assigned an IP from this BD
Subnet
• Used for kube-default, kube-system and any
other user defined POD EPGs.
• Maps to pod_subnet

• Cluster…-service:
• BD for PBR/SG services
• Created when ACI CNI plugin is deployed

acc-provision configures the required tenant
contracts
• The minimum required set of contracts is automatically configured to ensure basic cluster functionality:
• DNS
• Health-check
• ICMP
• Kube-API

• The administrator can define additional contracts if/when required

ACI Fabric Configuration – L4-L7 Devices
• Created once the ACI CNI plugin is deployed
• Dynamically updated if nodes are added or removed from the k8s cluster
• Service Graph Template: Specify a template for PBR redirection

ACI Fabric Configuration – Required L3OUT
• The Fabric Administrator must create and configure the L3Out that will be used to expose external services
• The L3Out name and the external network (Default Ext EPG) name must match those in the acc-provision configuration
• The nodes will likely require a contract to the External EPG on this L3Out (not auto-provisioned) to reach repos, registries, etc.

Once all this is done, the cluster is ready for installing Kubernetes

(Diagram: Kubernetes Master and Nodes 01..N connect to the fabric alongside the REPOS reachable via the L3Out. Each node has two sub-interfaces: Bond0.4001 — node IP in 11.1.0.0/16, in kube-node-EPG mapped to AcmeTenant-pdom on VLAN 4001, on kube-node-bd 11.1.0.1/16, MAC 00:22:bd:f8:19:ff — and Bond0.4093 — infraVLAN, in EPG access|default on VLAN 4093, BD default, VRF overlay-1. A contract connects kube-node-EPG to the External EPG on the L3Out; this contract is important to ensure access to repositories.)
ACI Fabric Configuration – Container Domain

• APIC keeps an inventory of the pods in the fabric and their metadata (labels, annotations), deployments, replicasets, etc.
• The fabric admin can search APIC for k8s nodes, masters, pods, services, …
• View pods per node; map them to encapsulation and physical attachment point
Demo 1
Deploying an Application
Demo 1 – guestbook application (for your reference)

• The guestbook application uses Redis to store its data. It writes its
data to a Redis master instance and reads data from multiple Redis
slave instances.
• The code can be found at:
https://kubernetes.io/docs/tutorials/stateless-application/guestbook/

Demo 1 – guestbook application (for your reference)

Create a namespace for our application


cisco@k8s-01:~/demo/guestbook1$ kubectl create namespace guestbook

Deploy all the components


cisco@k8s-01:~/demo/guestbook1$ kubectl --namespace=guestbook apply -f complete.yaml
deployment "frontend" created
service "frontend" created
deployment "redis-master" created
service "redis-master" created
deployment "redis-slave" created
service "redis-slave" created

Check POD status


cisco@k8s-01:~/demo/guestbook1$ kubectl --namespace=guestbook get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE
frontend-1768566195-mj43h 1/1 Running 0 2m 10.33.1.11 k8s-02
frontend-1768566195-tpw75 1/1 Running 0 2m 10.33.0.153 k8s-03
frontend-1768566195-vljrh 1/1 Running 0 2m 10.33.0.155 k8s-03
redis-master-2365125485-8hg60 1/1 Running 0 2m 10.33.0.152 k8s-03
redis-slave-3837281623-p4fs7 1/1 Running 0 2m 10.33.1.12 k8s-02
redis-slave-3837281623-qw894 1/1 Running 0 2m 10.33.0.154 k8s-03
Note: All the commands are executed from the Kubernetes master node

Demo 1 – Check APIC Controller Domain (for your reference)

• APIC has complete visibility into k8s objects
• Visibility on the node where a specific pod is running

Demo 1 – Check APIC EPG (for your reference)

(Screenshot: the EPG operational view shows the POD names as endpoints)

Demo 1 – Cluster Services (for your reference)

• By default every POD is exposed only to the k8s cluster via a


Service IP. You can imagine this as a Virtual IP of a load balancer.
• With the ACI CNI plugin the LoadBalancing for the internal cluster
services is performed by OVS
cisco@k8s-01:~/demo/guestbook1$ kubectl --namespace=guestbook get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend 10.37.0.124 <nodes> 80:32677/TCP 25m
redis-master 10.37.0.162 <none> 6379/TCP 25m
redis-slave 10.37.0.136 <none> 6379/TCP 25m

• Try to access the service from one of the nodes


cisco@k8s-01:~/demo/guestbook1$ curl 10.37.0.124
<html ng-app="redis">
<head>
<title>Guestbook</title>
!SNIP!

Demo 2
Placing PODs/Namespaces into EPGs
Demo 2 – APIC Steps (for your reference)

• Create an EPG under your application


• BD = your pod BD
• VMM Domain = Your Kubernetes Domain

• Every POD in an EPG needs to be able to communicate with:


• kube-system for cluster wide DNS resolution
• kube-node for health monitoring probes
• Top Tip: Use EPG contract masters and inherit contracts from kube-default!

Demo 2 – acikubectl (for your reference)

• Utility to manage and troubleshoot the k8s cluster


• Can be used to annotate Namespaces or Deployments with the Tenant/App/EPG names

cisco@k8s-01:~/demo/guestbook1$ acikubectl set default-eg namespace guestbook -t KubeSpray -a kubernetes -g guestbook


Setting default endpoint group:
Endpoint Group:
Tenant: KubeSpray
App profile: kubernetes
Endpoint group: guestbook

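Under the hood this sets an annotation on the object; a sketch of the equivalent Namespace manifest — the `opflex.cisco.com/endpoint-group` key is what the ACI CNI watches, and the tenant/app/EPG values below match this demo:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: guestbook
  annotations:
    # JSON value names the target Tenant / Application Profile / EPG
    opflex.cisco.com/endpoint-group: '{"tenant":"KubeSpray","app-profile":"kubernetes","name":"guestbook"}'
```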
Check under your EPG (for your reference)

• All your PODs should now have moved from kube-default to guestbook

Exposing Services
The extern_dynamic subnet
• Defined in acc-provision configuration file
• An IP address will be automatically selected from this subnet to
expose your service outside of the k8s cluster/fabric
• Expose the service as “LoadBalancer” (as per the Kubernetes standard)
• The extern_dynamic subnet is not associated to a BD: You need to
configure your external router with static routes toward your L3OUT
for this subnet
cisco@k8s-01:~/demo/guestbook1$ kubectl --namespace=guestbook get svc frontend
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend 10.37.0.124 10.34.0.5 80:32677/TCP 5h

(the EXTERNAL-IP 10.34.0.5 is allocated from the extern_dynamic subnet)
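A sketch of the static route needed on the external router, using the sample subnet above (IOS-style syntax; the next-hop is a placeholder for your L3Out peering address):

```
! Point the extern_dynamic subnet at the fabric via the L3Out peer
ip route 10.34.0.0 255.255.255.0 192.168.100.1
```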

Service Graphs and PBR

Every time a service is exposed the ACI CNI controller will deploy:
• An External EPG with a /32 match for the Service IP
• A new contract between the svc_ExtEPG and the default_ExtEPG (the one defined in the acc-provision config file)
• A Service Graph with PBR redirection containing every node where an exposed POD is running

(Diagram: Client → RTR → L3Out; default_ExtEpg 0.0.0.0/0 consumes a contract, with a PBR Service Graph, provided by Svc_x_ExtEPG 10.34.0.5/32; the PBR policy redirects to OVS on Node1..NodeN, which host Pod1..Pod5.)

Service Graphs and PBR – Packet walk

1. The client sends a request to 10.34.0.5; ACI performs a Longest Prefix Match (LPM) on the source IP and classifies the traffic into default_extEPG
2. ACI does a routing lookup for 10.34.0.5; the IP does not exist in the fabric so it would be routed out, but LPM on the destination classifies it into Svc_x_ExtEPG
3. PBR redirection is triggered and the traffic is load-balanced by the fabric to one of the nodes

(Diagram: the packet, SIP 192.168.1.100 / DIP 10.34.0.5, enters through the L3Out and is redirected by the PBR service graph to the OVS of one node.)
#CLUS BRKACI-2505 © 2019 Cisco and/or its affiliates. All rights reserved. Cisco Public 114
Service Graphs and PBR – Packet walk

4. The K8s node is not expecting any traffic directed to the external service IP, so OVS performs NAT as required (SIP 192.168.1.100 / DIP 10.34.0.5 becomes SIP 192.168.1.100 / DIP PodX IP)
5. If there are multiple PODs on a single node, OVS performs a second-stage load balancing to distribute the load between the PODs running on that node

(Diagram: the redirected packet reaches OVS on the node, which rewrites the DIP from 10.34.0.5 to the chosen PodX IP.)
Service Graphs and PBR – Packet walk

4. PodX replies to the client (DIP 192.168.1.100 / SIP PodX IP)
5. OVS restores the original external Service IP (SIP becomes 10.34.0.5)
6. PBR redirection is not triggered, since the source EPG is the Shadow EPG of the PBR node
7. Traffic is routed back to the client (and is permitted by the contract)

(Diagram: the reply leaves OVS with the SIP rewritten back to 10.34.0.5 and exits via the L3Out toward the client.)
Demo 3
Exposing Services
Exposing a service
• Simply choose the “LoadBalancer” type in the service definition
• The ACI CNI plugin will:
• Automatically pick a free IP from the extern_dynamic subnet
• Create the ExtEPG
• Create contracts
• Create PBR redirection rules
• Deploy the service graph

Scalability
• Currently the scalability of exposing external services with PBR is limited by the number of external EPGs per L3Out
• ACI 4.0 supports 250 external EPGs per L3Out per leaf*
• This is a soft limit and will increase over time
• But we want more! So?

*For details check:


https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/4-x/verified-scalability/Cisco-ACI-Verified-Scalability-Guide-401.html

Scaling External
Services with
Ingress
Kubernetes – Ingress
• Composed of two parts:
• Ingress Resources: collection of rules that defines how inbound
connections can reach the internal cluster services.
• Ingress controller: responsible for fulfilling the Ingress, usually with a
virtual loadbalancer (nginx, ha-proxy)
• Ingress controller can be shared between multiple namespaces
• It can be configured to give services externally-reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, etc.
• Easy integration with DNS: configure a wildcard DNS record
(*.camillo.com) pointing to the IP of the ingress controller
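As a sketch of the Ingress Resource side (the host names match the diagram on the next slide; the service name and API version are assumptions for a cluster of this vintage):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1
  namespace: app1
spec:
  rules:
  - host: app1.camillo.com     # wildcard DNS *.camillo.com resolves to the ingress controller IP
    http:
      paths:
      - backend:
          serviceName: app1    # the namespace Service in front of the app1 pods
          servicePort: 80
```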
Kubernetes – Ingress

(Diagram: a client resolves app1.camillo.com and app2.camillo.com to the Ingress Controllers 1..N in the ingress namespace. Each application namespace, app1 and app2, publishes an Ingress Resource — "I am app1" / "I am app2" — and the controllers route requests to the matching namespace Service, which load-balances to Pod1/Pod2.)
ACI and Kubernetes Ingress
• Expose the Ingress Controller via Service Graph with PBR
• A single Service Graph/ExtEPG can now host as many services as
we want
• Ingress Controller can be scaled as needed
• If you create a dedicated EPG for ingress you need the following contracts:
• All the contracts used in kube-default (remember contract inheritance)
• Consume: kube-API — the Ingress Controller needs to be able to speak with the Kube API server
• Consume: any required ports between the Ingress Controller and the services you want to expose
ACI and Kubernetes Ingress

(Diagram: Client → RTR → L3Out; default_extEpg 0.0.0.0/0 consumes a contract, with a PBR Service Graph, provided by Ingress_ExtEPG 10.34.0.6/32. The graph redirects to the Ingress Controllers 1..N in the ingress namespace, which route to the Services of namespaces app1 and app2 based on the published Ingress Resources.)
ACI and Kubernetes External Services -
Summary
• Two options (can be used at the same time, even for the same service):
• Exposing services via Ingress
• Exposing up to 250 services directly with a Service Graph with PBR

Demo 4
Exposing Services with Ingress
Container to
Non-Container
Communications
Container to Non-Container Communications
• In production environments it is often preferred to run services such as high-performance databases as VMs or bare metal servers
• This calls for the ability to easily provide communication between K8s PODs and VMs/bare metal
• Simply deploy a contract between your EPGs; ACI will do the rest!
• This works across any VMM domain and Physical Domain; for example, a Container Domain using VXLAN can speak with a Microsoft SCVMM Domain using VLAN

Container to Non-Container Communications:
F5 Integration with ACI CNI
F5 As an Ingress LoadBalancer
• It is possible to use a Physical or Virtual BIG-IP to expose your
services
• Runs a k8s-bigip-ctlr to send config to the BIG-IP
• Two modes of operation:
• Load-balance the traffic directly between the PODs: the BIG-IP needs direct connectivity to the POD subnet
• Load-balance the traffic to the NodePort of the service you are exposing: this is suboptimal, as all the nodes are added to the LB pool, so traffic can be sent to nodes where no PODs are present and has to trombone around the cluster
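The mode is selected by a k8s-bigip-ctlr flag; a sketch of the relevant deployment arguments (the URL and partition below are placeholders):

```
--bigip-url=https://bigip.example.com
--bigip-partition=k8s
--pool-member-type=cluster     # pool members are POD IPs (needs direct POD-subnet reachability)
# --pool-member-type=nodeport  # pool members are node NodePorts (traffic may trombone via nodes without PODs)
```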

ACI – F5 Design (Booth 1323)

(Diagram: Clients reach the BIG-IP LB in the BIG-IP EPG over VLAN-10 via a contract; a second contract connects the BIG-IP EPG to the POD EPG over VXLAN-123456, where POD-1..POD-n live.)
Demo 5
F5 and ACI CNI
What about
Service Meshes?
(ISTIO)
ISTIO is Transparent
• The sidecar proxy sits inside the original application POD and is completely transparent to our CNI plugin.

Kubernetes Cluster
Node Failure
Kubernetes Cluster Node Failure Detection
• Kubernetes monitors all the nodes in the cluster by default
• Depending on your specific configuration, node failure detection and container restart can take from ~50s to 5min
• Once a node is detected as NotReady (failed), the aci-containers-controller updates the ACI configuration as required, e.g. a failed node is removed from the PBR redirection policy
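Those timings come from standard Kubernetes timers; a sketch of the relevant flags (the values shown are the common upstream defaults of this era — tune them to shrink the detection window):

```
# kubelet
--node-status-update-frequency=10s   # how often a node posts its status

# kube-controller-manager
--node-monitor-grace-period=40s      # no status for this long -> node marked NotReady
--pod-eviction-timeout=5m            # NotReady for this long -> pods evicted and rescheduled
```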

ACI CNI redundancy during node failure
aci-containers-host and aci-containers-openvswitch
• DataPlane of the CNI Plugin
• Start and Stop with the Node
• If isolated from the network they will try to reconnect to the leaf

ACI CNI redundancy during node failure
aci-containers-controller (acc)
• Stateless
• Does not sit in the data-path
• In case of failure k8s will restart it on a different node

ACI CNI redundancy
common configuration mistake
aci-containers-controller
• Node connects to OOB and ACI Fabric
• K8S Cluster communications are happening over the ACI Fabric
• aci-containers-controller communicates with APIC via OOB

(Diagram: Node1 and Node2 attach to the ACI Fabric; aci-containers-controller1 runs on Node1; both nodes also attach to an OOB network.)

ACI CNI redundancy
common configuration mistake
aci-containers-controller
• Node1 loses connectivity with the ACI Fabric (interface down)
• The K8s master detects Node1 as lost and restarts acc on Node2
• The old instance (acc1) is still running and keeps injecting the old config, overwriting the configuration changes pushed by acc2
• When designing your network, ensure that acc communication with the APIC goes through the fabric

(Diagram: as before, but Node1's fabric link is down; aci-containers-controller1 still reaches the APIC via OOB while aci-containers-controller2 runs on Node2.)
What is coming
ACI 4.2 Upcoming Features
• POD IP Source NAT
• Handled directly by OVS
• Annotation based, can be enabled at Namespace, Deployment or POD
level.
• Docker EE Support

How can I build my
own lab?
Cisco Container Platform

• Native Kubernetes (100% upstream): direct updates and best practices from the open source community
• Hybrid cloud optimized: e.g. Google, AWS, …
• Integrated: networking | management | security | analytics
• Turnkey solution for production-grade container environments
• Flexible deployment model: VM | bare metal | HX, UCS, ACI | public cloud

Easy to acquire, deploy and manage | Open and consistent | Extensible platform | World-class advisory and support

aci_kubeadm (not officially supported)

• Set of Ansible scripts to deploy a single-master cluster using the ACI CNI plugin
• Open source (not supported by TAC/Cisco, etc.)
• Optionally can clone VM templates and configure everything, providing a 1-click deployment solution for your lab
• https://github.com/camrossi/aci_kubeadm

Yes, it is me… Did I mention it is not officially supported? 

kubespray_aci (not officially supported)

• Set of Ansible scripts to deploy a multi-master cluster using the ACI CNI plugin
• Open source (not supported by TAC/Cisco, etc.)
• https://github.com/camrossi/kubespray_aci

Still me, still not officially supported 

Complete your online session evaluation

• Please complete your session survey after each session. Your feedback is very important.
• Complete a minimum of 4 session surveys and the overall conference survey (starting on Thursday) to receive your Cisco Live water bottle.
• All surveys can be taken in the Cisco Live Mobile App or by logging in to the Session Catalog on ciscolive.cisco.com/us.

Cisco Live sessions will be available for viewing on demand after the event at ciscolive.cisco.com.

Continue your education

• Demos in the Cisco campus
• Walk-in labs
• Meet the engineer 1:1 meetings
• Related sessions
Related Sessions
• DEVNET-2617: ACI CNI integrated with a CI/CD Pipeline
• LABACI-2010: ACI Runs Everything, ACI CNI, SCVMM, VMWare

Thank you

#CLUS