
What’s New in Red Hat

OpenShift Container Platform 3.9

OpenShift Commons Briefing


21 March 2018

Marc Curry
Steve Speicher
OpenShift Product Management Team
OpenShift = Enterprise Kubernetes+
Build, Deploy and Manage Containerized Apps
CONTAINER CONTAINER CONTAINER CONTAINER CONTAINER

SELF-SERVICE

SERVICE CATALOG
(LANGUAGE RUNTIMES, MIDDLEWARE, DATABASES, …)

BUILD AUTOMATION DEPLOYMENT AUTOMATION

APPLICATION LIFECYCLE MANAGEMENT (CI / CD)

CONTAINER ORCHESTRATION & CLUSTER MANAGEMENT (KUBERNETES)

NETWORKING  STORAGE  REGISTRY  SECURITY  LOGS & METRICS

INFRASTRUCTURE AUTOMATION & COCKPIT

OCI CONTAINER RUNTIME & PACKAGING

ATOMIC HOST /
RED HAT ENTERPRISE LINUX
OpenShift Roadmap
OpenShift Container Platform 3.6 (August, Q3 CY2017)
● Kubernetes 1.6 & Docker 1.12
● New Application Services - 3Scale API Mgt OnPrem, SCL 2.4
● Web UX Project Overview enhancements
● Service Catalog/Broker & UX (Tech Preview)
● Ansible Service Broker (Tech Preview)
● Secrets Encryption (3.6.1)
● Signing/Scanning + OpenShift integration
● Storage - CNS Gluster Block, AWS EFS, CephFS
● OverlayFS with SELinux Support (RHEL 7.4)
● User Namespaces (RHEL 7.4)
● System Containers for docker

OpenShift Container Platform 3.7 (December, Q4 CY2017)
● Kubernetes 1.7 & Docker 1.12
● Red Hat OpenShift Application Runtimes (GA)
● Service Catalog/Broker & UX (GA)
● OpenShift Ansible Broker (GA)
● AWS Service Broker
● Network Policy (GA)
● CRI-O (Tech Preview)
● CNS for logging & metrics (iSCSI block), registry
● CNS 3X density of PVs (1000+ per 3 nodes), Integrated Install
● Prometheus Metrics and Alerts (Tech Preview)
● OCP + CNS integrated monitoring/Mgmt, S3 Svc Broker

OpenShift Container Platform 3.9 (March, Q1 CY2018)
● Kubernetes 1.8 and 1.9 and Docker 1.13
● CloudForms CM-Ops (CloudForms 4.6)
● CRI-O (Full Support in z-stream)
● Device Manager (Tech Preview)
● Central Auditing
● Jenkins Improvements
● HAProxy 1.8
● Web Console Pod
● CNS (Resize, vol custom naming, vol metrics)

OpenShift Container Platform 3.10 (June, Q2 CY2018)
● Kubernetes 1.10 and CRI-O and Buildah (Tech Preview)
● Custom Metrics HPA
● Smart Pruning
● Istio (Dev Preview)
● IPv6 (Tech Preview)
● OVN (Tech Preview), Multi-Network, Kuryr, IP per Project
● oc client for developers
● AWS AutoScaling
● Golden Image Tooling and TLS bootstrapping
● Windows Server Containers (Dev Preview)
● Prometheus Metrics and Alerts (GA)
OCP 3.9 - Extensible Application Platform
● Service Expansion
  ○ Database APBs, SCL 3.0, Catalog view enhancement
● Security
  ○ Auditing, Jenkins secret integration, private repo ease of use
● Manageability
  ○ CFME 4.6, HAProxy 1.8, Egress port control, Soft Prune, PV resize
● Workload Diversity
  ○ Device Manager, Local Storage
● Container Runtime
  ○ CRI-O
EXCITING MIDDLEWARE SERVICES UPDATES

● High-performance rule processing service based on the Drools 7 community project, with extensions for complex event processing (CEP)
● Guided rules editor, decision tables, and web-based rule authoring, testing, and deployment tools
● Business resource optimization tool based on the OptaPlanner community project
● Managed repository for rule definitions, with built-in governance workflows to ensure that changes and updates are properly controlled
EXCITING MIDDLEWARE SERVICES UPDATES

● Node core distro to be delivered only through RHOAR, no standalone SKU
  ○ Evaluating NPM modules for future support, with focus on microservice development and deployment concerns
● Non-Distro efforts
  ○ Tooling & boosters for RHOAR integration
● Booster coverage
  ○ Showcases features in Node.js specific to RHOAR/microservices
  ○ Work continues on infrastructure/workflow
● Consumption (March 12th!)
  ○ S2I images (supported for v8, unsupported but available for v9/v10)
  ○ OpenShift Streams integration
Self-Service / UX
Expose and Provision Services

[Diagram: the OpenShift Service Catalog talks to Service Brokers, each exposing a class of services]
● OpenShift Template Broker → OpenShift Templates
● OpenShift Ansible Broker → Ansible Playbook Bundles
● AWS Service Broker → Amazon Web Services public cloud services
● Other Service Brokers → other compatible services
Self-Service / UX
Feature(s): OpenShift Ansible Broker
What’s New for 3.9:
● New upstream community website: Automation Broker
  ○ Downstream will still be called ‘OpenShift Ansible Broker’, with main focus on APB ‘Service Bundles’ (application definition)
  ○ Community-contributed application repo: https://github.com/ansibleplaybookbundle
● Support for running the broker behind an HTTP proxy in a restricted network environment
  ○ Documentation: https://github.com/openshift/ansible-service-broker/blob/master/docs/proxy.md
  ○ Video: https://www.youtube.com/watch?v=-Fdfz1RqI94
● Plan or parameter updating of PostgreSQL, MariaDB, and MySQL APB-based services will preserve data
  ○ Update logic in the APB handles preserving data; useful for cases where you want to move from a service plan with ephemeral storage to a different service plan utilizing a PV
  ○ Video: https://www.youtube.com/watch?v=kslVbbQCZ8s&t=220s
● Now an official add-on for Minishift
  ○ Documentation: https://github.com/minishift/minishift-addons/tree/master/add-ons/ansible-service-broker
  ○ Video: https://www.youtube.com/watch?v=6QSJOyt1Ix8
● Network isolation support for multi-tenant environments
  ○ Joins isolated networks so that an APB can talk to the resulting pods it creates over the network
● [Experimental] Async bind support in the broker
  ○ Allows binds that need more time to execute than the 60-second response time defined in the OSB API spec
  ○ Async bind spawns a binding job and returns the job token immediately; the catalog uses last_operation to monitor the state of the running job until either successful completion or a failure

Self-Service / UX

Feature(s): Catalog from within project view

Description: Quickly get to the catalog from within a project

How it Works:
● “Catalog” item in left navigation
Self-Service / UX

Feature(s): Quick search catalog from within project view

Description: Quickly find services without leaving the project view

How it Works:
● Type in your search criteria
● Matching services appear as minimal service icons
Self-Service / UX

Feature(s): Select preferred home page

Description: Power users may want to jump straight to certain pages after login

How it Works:
● Access the menu from the account dropdown
● Pick any of: Catalog Home, All Projects, or a specific project
● Log out and back in
● Enjoy!
Self-Service / UX

Feature(s): Configurable inactivity timeout

Description: Configure the web console to log the user out after a set timeout

How it Works:
● Default is 0 (never)
● Set the Ansible variable to the number of minutes (inventory sketch below)

openshift_web_console_inactivity_timeout_minutes=n
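For reference, a minimal sketch of how this could look in an openshift-ansible inventory (the 30-minute value is purely illustrative):

[OSEv3:vars]
# hypothetical example: log idle web console sessions out after 30 minutes
openshift_web_console_inactivity_timeout_minutes=30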
Self-Service / UX

Feature(s): Console as separate pod

Description: Separates the web console out of the API server

How it Works:
● Web console packaged as a container image
● Deployed as a pod
● Configuration is done via a ConfigMap, and changes are auto-detected (sketch below)
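As a sketch of where that configuration lives (the namespace and ConfigMap names below are our understanding of the 3.9 defaults; verify against your cluster):

# inspect the console pod and edit its configuration; changes are picked up automatically
oc get pods -n openshift-web-console
oc edit configmap webconsole-config -n openshift-web-console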
Self-Service / UX

Feature(s): StatefulSets out of Tech Preview

Description: The Tech Preview label has been removed
How it Works:
● Same capability as tech preview feature in 3.7
DevExp / Builds

Feature(s): Jenkins memory usage improvements

Description: Jenkins worker pods often consume too much or too little memory
How it Works:
● Startup script intelligently looks at pod limits
● JVM env vars appropriately set to ensure limits
are respected for spawned JVMs
DevExp / Builds Miscellaneous

● ‘oc cluster up’ allows specifying the number of PVs to create
● Ability to specify default tolerations
● Toleration of CRI-O in build scenarios
● Secrets available in Jenkins as credentials
Dev Tools - Local Dev

Minishift 1.14 / CDK 3.3:


● Many improvements around addons:
dependencies, management, …
● Caching of container images
● Static IP for Hyper-V
● Host folder mounts using sshfs
Dev Tools - SCL 3.0!

[Figure: updated and new Software Collections shipping in SCL 3.0, shown by version number only (3.4, 10.2, 9.6, 1.12, 8, 7.1, 3.6)]
Networking

Feature(s): Semi-automatic namespace-wide egress IP

Description: All outgoing external connections from a project will share a single fixed source IP address and will send all traffic via that IP, so that external firewalls can recognize the application associated with a packet.

How it Works:
● Supported by the multitenant / networkpolicy plugins
● Egress IPs do not accept connections on any port
● NetNamespace has an EgressIPs array that can be set (though only one IP, currently) for the egress IP (see the sketch below)
● The egress IP must be on the local subnet of the node's primary network interface (it is added as an additional address on that interface)
● Once EgressIPs is set on a NetNamespace, and until the egress IP is claimed, pod-to-pod traffic is allowed, but pod-to-external traffic is dropped
● Once claimed, a pod in that NetNamespace on that node will be able to send traffic to external IPs, with that egress IP as the source of the traffic
● For a pod in that NetNamespace on a different node, traffic will first travel via VXLAN to the node hosting the egress IP, and then it will be able to send traffic to external IPs
● Egress traffic from pods in other NetNamespaces is still NAT’d to the primary IP address of the node, just like in the no-automatic-egress-IP case

Stability enhancements that will enable in 3.10:
● HA
● “Semi-Automatic” → “Automatic” (no longer a manual admin process)
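A minimal sketch of the manual assignment described above, assuming a project named myproject, a node named node1, and 192.168.1.100 as an unused address on that node's primary subnet:

# reserve the egress IP for the project's NetNamespace
oc patch netnamespace myproject -p '{"egressIPs": ["192.168.1.100"]}'
# claim the egress IP on the node that will host it
oc patch hostsubnet node1 -p '{"egressIPs": ["192.168.1.100"]}'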
Networking

Feature(s): Support our own HAProxy RPM for consumption by the router

Description: Route configuration changes and process upgrades performed under heavy load have typically required a stop/start sequence of certain services, causing temporary outages. There existed iptables “trickery” to work around the issue.

In OpenShift 3.9, HAProxy 1.8 sees no difference between updates and upgrades; a new process is used with a new configuration, and the listening socket’s file descriptor is transferred from the old to the new process so the connection is never closed. The change is seamless, and enables our ability to do things, like HTTP/2, in the future.

How the HAProxy “soft reload” used to work:
1. The new process, with its new configuration, tries to bind to all listening ports.
2. On success, the new process listens for incoming connections and signals the old process(es) that they can quit once they have finished serving existing connections.
3. On failure, the new process sends a signal to the old process(es) asking them to temporarily release the ports (the ports may not be bound by any process during this window), then tries again.
4. If binding fails again, it gives up and signals the old process to continue taking care of the incoming connections.
Master
Feature(s): StatefulSets / DaemonSets / Deployments no longer Tech Preview

Description: The core workloads API, which includes the DaemonSet, Deployment, ReplicaSet, and StatefulSet kinds, has been promoted to GA stability in upstream Kubernetes.

For OpenShift, this means that StatefulSets, DaemonSets, and Deployments are now stable/supported, and the Tech Preview label is removed in OpenShift 3.9. (A minimal StatefulSet sketch follows the list below.)

Additional Information:
● StatefulSets
● DaemonSets
● Deployments
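A minimal StatefulSet sketch using the now-GA apps/v1 API; the name, image, and sizes are illustrative only:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                     # hypothetical name
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: registry.access.redhat.com/rhscl/httpd-24-rhel7   # illustrative image
        ports:
        - containerPort: 8080
  volumeClaimTemplates:         # each replica gets its own PVC
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi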
Master
Feature(s): Central Audit Capability

Description: Provides auditing of items that admins would like to…

View (examples):
● Event timestamp
● The activity that generated the entry
● The API endpoint that was called
● The HTTP output
● The item changed due to an activity, with details of the change
● The username of the user that initiated an activity
● The name of the namespace the event occurred in, where possible
● The status of the event, either success or failure

Trace (examples):
● User login and logout from the web interface (including session timeout), including unauthorised access attempts
● Account creation, modification, or removal
● Account role/policy assignment/de-assignment
● Scaling of pods
● Creation of new projects or applications
● Creation of routes and services
● Triggers of builds and/or pipelines
● Addition/removal or claim of persistent volumes

How It Works: Set up auditing in the master-config file, and restart the master service (a sketch of the referenced policy file follows below):

auditConfig:
  auditFilePath: "/var/log/audit-ocp.log"
  enabled: true
  maximumFileRetentionDays: 10
  maximumFileSizeMegabytes: 10
  maximumRetainedFiles: 10
  logFormat: json
  policyConfiguration: null
  policyFile: /etc/origin/master/audit-policy.yaml
  webHookKubeConfig: ""
  webHookMode: ""
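Since the config above references a policyFile, here is a minimal sketch of what an advanced audit policy might contain (the rule choices are illustrative, not a recommendation):

apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
# skip read-only requests for noisy resources
- level: None
  verbs: ["get", "list", "watch"]
  resources:
  - group: ""
    resources: ["events"]
# record secret/configmap access at metadata level only (no payloads)
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
# log everything else, including request bodies
- level: Request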
Master
Feature(s): Add support for Deployments to oc status

Description: Provides similar output for upstream deployments as can be seen for downstream DeploymentConfigs, with nested deployment set.

The old (pre-3.9) output:

$ oc-3.7 status
In project dc-test on server https://127.0.0.1:8443

svc/ruby-deploy - 172.30.231.16:8080
  pod/ruby-deploy-5c7cc559cc-pvq9l runs test

How it Works:

$ oc status
In project My Project (myproject) on server https://127.0.0.1:8443

svc/ruby-deploy - 172.30.174.234:8080
  deployment/ruby-deploy deploys istag/ruby-deploy:latest <-
    bc/ruby-deploy source builds https://github.com/openshift/ruby-ex.git on istag/ruby-22-centos7:latest
      build #1 failed 5 hours ago - bbb6701: Merge pull request #18 from durandom/master (Joe User <joeuser@users.noreply.github.com>)
  deployment #2 running for 4 hours - 0/1 pods (warning: 53 restarts)
  deployment #1 deployed 5 hours ago
Master (Tech Preview)

Feature(s): Dynamic Admission Controller follow-up

Description: An admission controller is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object, but after the request is authenticated and authorized.

To assist admission controller developers, the upstream documentation has been enhanced and a blog post that explains how it works was created. (A minimal webhook registration sketch follows the use cases below.)

How it Works (example Use Cases):


● Mutation of pod resources
● Security response
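For illustration, a minimal sketch of the ValidatingWebhookConfiguration that such a dynamic admission controller registers; every name, the service, and the path here are hypothetical:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy.example.com          # hypothetical
webhooks:
- name: pod-policy.example.com
  failurePolicy: Ignore                 # don't block API requests if the webhook is unavailable
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: webhook-demo           # hypothetical namespace and service
      name: pod-policy
      path: /validate
    caBundle: <base64-encoded CA certificate>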
Master
Feature(s): Feature Gates

Description: Platform admins now have the ability to turn off specific features for the entire platform. This will assist in controlling access to alpha, beta, or tech preview features in production clusters.

How it Works: Feature gates use a key=value pair in the master and kubelet config files that describes the feature you wish to block. (Full list)

Control Plane: master-config.yaml

kubernetesMasterConfig:
  apiServerArguments:
    feature-gates:
    - CPUManager=true

kubelet: node-config.yaml

kubeletArguments:
  feature-gates:
  - DevicePlugin=true
E2E Provider Integration
Updated Reference Architecture Implementation Guides
Release: ocpsupplemental-3.9 (4-6 weeks after 3.9 GA)
Deployment and management of the following supported combinations:
● OpenShift 3.9 on Red Hat OpenStack Platform 10 (RH-OSP)
● OpenShift 3.9 on Amazon Web Services (AWS)
● OpenShift 3.9 on Microsoft Azure
● OpenShift 3.9 on VMware vSphere
● OpenShift 3.9 on Red Hat Virtualization 4.2 (RHV) [1]
● OpenShift 3.9 on Google Cloud Platform (GCP) [2]

Deprecation of unsupported “glue code” (ancillary scripts, Ansible playbooks, related GitHub repos, …)
● No longer required as we’re using the provisioner code provided by the installer itself
● All cloud providers

[1] The release dates for the Ref Arch update and RHV 4.2 are very close, so this may fall back to 4.1.
[2] At-risk.
Questions
OpenShift = Enterprise Kubernetes+
Build, Deploy and Manage Containerized Apps
CONTAINER CONTAINER CONTAINER CONTAINER CONTAINER

SELF-SERVICE

SERVICE CATALOG
(LANGUAGE RUNTIMES, MIDDLEWARE, DATABASES, …)

BUILD AUTOMATION DEPLOYMENT AUTOMATION

APPLICATION LIFECYCLE MANAGEMENT (CI / CD)

CONTAINER ORCHESTRATION & CLUSTER MANAGEMENT (KUBERNETES)

NETWORKING  STORAGE  REGISTRY  SECURITY  LOGS & METRICS

INFRASTRUCTURE AUTOMATION & COCKPIT

OCI CONTAINER RUNTIME & PACKAGING

ATOMIC HOST /
RED HAT ENTERPRISE LINUX
Clustered Container Infrastructure
Applications Run Across Multiple Containers & Hosts

CONTAINER CONTAINER CONTAINER CONTAINER CONTAINER

CONTAINER ORCHESTRATION & CLUSTER MANAGEMENT (KUBERNETES)

NETWORKING  STORAGE  REGISTRY  SECURITY  LOGS & METRICS

OCI CONTAINER RUNTIME & PACKAGING

ATOMIC HOST /
RED HAT ENTERPRISE LINUX
Container Orchestration

Feature(s): Kubernetes Upstream (Red Hat Blog and Commons Webinar)

Description: OCP 3.9 is a double rebase release. We literally had to go through the same release motions twice. Red Hat continues to influence the product in the areas of Storage, Networking, Resource Management, Authentication & Authorization, Multi-tenancy, Security, Service Deployments and templating, and Controller functionality.

Red Hat Contributing Projects:
● Job Failure Policy
● Kubectl plugins
● Pod level QoS
● PV resizing
● Mount namespace
● CRD
● CronJob
● HPA Metrics
● StorageClass ReclaimPolicy
● Rules View API
● RBAC
● Mount Options
● LIST queries
● ClusterRole
● Containerized Mounts
● PV to Pod track and Delete
● Raw Block Storage

OpenShift 3.9 status of Kube 1.8 and 1.9 upstream features:
https://docs.google.com/spreadsheets/d/1xdjfFVyoUaDgZXak4OHA90wq_bNIKrrc7U2xr8fKXEU/edit?usp=sharing
Container Orchestration
Feature(s): Feature tracking documentation

Description: Customers have a difficult time knowing what support status a specific feature is in for a specific release of OpenShift.

How it Works: We have decided to add a table to the user guide to more clearly depict this information.
Scheduler (Tech Preview)
Device Manager

Feature(s): Device Plugins for Specialized Hardware

Description: People would like to set resource limits for hardware devices within their pod definition and have the scheduler find the node in the cluster with those resources. At the same time, Kubernetes needed a way for hardware vendors to advertise their resources to the kubelet without forcing them to change core code within Kubernetes.

How it Works: The kubelet now houses a device manager that is extensible through plugins. You load the driver support at the node level. Then you or the vendor writes a plugin that listens for requests to stop/start/attach/assign/etc. the requested hardware resources seen by the drivers. This plugin is deployed to all the nodes via a daemonSet. (A fuller pod sketch follows below.)

[Diagram: a "Deep Learning Pod" requests GPUs via its resource limits; the kubelet's device manager talks to an NVIDIA device-plugin daemonSet (hardware vendor provided), which sits on the vendor-provided device drivers]

resources:
  limits:
    nvidia.com/gpu: 3
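A fuller (hypothetical) pod sketch putting the limit from the diagram in context; nvidia.com/gpu is the resource name an NVIDIA device plugin advertises:

apiVersion: v1
kind: Pod
metadata:
  name: deep-learning            # hypothetical name
spec:
  containers:
  - name: trainer
    image: nvidia/cuda:9.0-base  # illustrative image
    resources:
      limits:
        nvidia.com/gpu: 3        # scheduler places the pod on a node advertising at least 3 GPUs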
Registry
Feature(s): “Soft” Image pruning

Description: Don’t remove the actual image, just free up etcd storage

How it works:
● Safer to run --keep-tag-revisions and --keep-younger-than (a sketch of the invocation follows below)
● After this is run, admins can choose to run a hard prune (which is safe to run as long as the registry is put in read-only mode)

Additional registry work:
● Mirror manifests with the image, to allow for pulling the image when the source image is unavailable
● Move the registry to a separate repository - further agility
● Investigate usage of fsck for corrupt image reporting
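A sketch of the soft-prune invocation with the flags named above (the values are illustrative):

# keep the 3 newest revisions of each tag and anything younger than 60 minutes;
# --confirm performs the prune instead of a dry run
oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm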
Installation

Feature(s): Automated 3.7 to 3.9 control plane upgrade

Description: The installer automatically handles stepping the control plane from 3.7 to 3.8 to 3.9 and the node upgrade from 3.7 to 3.9.

How it Works:
1. Control plane components [API, Controllers, Node (on control plane hosts)] are upgraded seamlessly from 3.7 to 3.8 to 3.9
   a. Data migration happens pre and post the 3.8 and 3.9 control plane upgrades
2. Other control plane components [Router, Registry, Service Catalog, Brokers] are upgraded from 3.7 to 3.9
3. Nodes [node, docker, ovs] are upgraded directly from 3.7 to 3.9 with only one drain of nodes
   a. 3.7 nodes operate indefinitely against 3.8 masters should the upgrade process need to pause in this state
4. Logging and metrics are updated from 3.7 to 3.9

Notes:
● Recommended/preferable to upgrade control plane and nodes independently
● You can still perform the upgrade all in one playbook (but rollback is more difficult)
● Playbooks do not allow for a clean install of 3.8

Preparation:

1. Validate 3.7 storage migration the day before the upgrade:
# oc adm migrate storage --include=* --loglevel=2
* If there are any errors, search Bugzilla or open a support case to remediate the storage problems

2. Enable OCP 3.8 and 3.9 repos on all hosts:
# subscription-manager repos --disable="rhel-7-server-ose-3.7-rpms" \
    --enable="rhel-7-server-ose-3.8-rpms" \
    --enable="rhel-7-server-ose-3.9-rpms" \
    --enable="rhel-7-server-ansible-2.4-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-fast-datapath-rpms"

3. Install the 3.9 playbooks:
# yum upgrade openshift-ansible

Upgrade:

1. When the control plane is upgraded independently of the nodes:
# playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade_control_plane.yml
# playbooks/openshift-logging/config.yml
# playbooks/openshift-metrics/config.yml

2. Node upgrade (assumes the preparation steps of enabling repos have already happened and the all-in-one upgrade.yml was not used):
# playbooks/byo/openshift-cluster/upgrades/v3_9/upgrade_nodes.yml
Installation
Feature(s): Improved playbook performance

Description: Significant refactoring and restructuring of playbooks in 3.9 to improve performance.

How it Works:
● Restructured playbooks to push all fact gathering and common dependencies up into the
initialization plays so they’re only called once rather than each time a role needs access to their
computed values.
● Refactored playbooks to limit the hosts they touch to only those that are truly relevant to the
playbook.
● As an example, prior to these changes upgrading the control plane in our large online environments spent
>40 minutes gathering useless facts from 290 compute nodes that aren't relevant to the control plane
upgrade.
● Initial results showed a large reduction in overall installation times; up to 30% faster in some cases
Installation
Feature(s): Quick installation [deprecated]
Description: Quick installation is being deprecated in 3.9 and will be removed in 3.10
How it Works:
● Quick installation will only be capable of installing 3.9
● It will not be able to upgrade from 3.7 or 3.8 to 3.9
● The `atomic-openshift-installer upgrade` function will exit with a message indicating updates are not supported under this version of the quick installer
● If an attempt to upgrade is made, reference the documentation explaining how to migrate from the existing quick-installer-generated inventory to using openshift-ansible directly

● openshift-ansible (advanced installation) will be the replacement for quick installation
● Refer to the Installation and Configuration section of the OpenShift documentation

● As part of the deprecation effort in 3.9:
  ○ Using an existing quick-installer-generated inventory to perform an upgrade from 3.7 to 3.9 will be documented
  ○ A localhost inventory will be provided that requires *zero* modification
  ○ An updated hosts.example will be provided so that everything an admin would need to modify appears on the first screen (masters, nodes, etcd group definition), making it clear that all other variables are optional
Storage

Feature(s): End-to-end Online Expansion (Resize) for CNS gluster-fs PVs

Description: Users can expand their persistent volume claims online from OCP for CNS glusterFS volumes
• Can be done online from OCP
• Previously only available from the Heketi CLI
• User edits the PVC with the new size, triggering a PV resize
• Fully qualified for glusterFS-backed PVs
• Gluster-block PV resize will be added with RHEL 7.5
• Demo Video

How it Works/Example:
• Add AllowVolumeExpansion=true to the storage class
• oc edit pvc claim-name
• Edit the field ‘spec → requests → storage: <new value>’ (an equivalent one-line patch sketch follows below)
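Equivalent to the ‘oc edit pvc’ step above, a one-line patch sketch (the claim name and new size are illustrative):

oc patch pvc claim1 -p '{"spec": {"resources": {"requests": {"storage": "20Gi"}}}}'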
Storage
Feature(s): PV Resize

Description: Users can expand their persistent volume claims online from OCP for the following storage backends:
● CNS glusterFS
● gcePD
● cinder

How it Works:
- Create a storageclass with AllowVolumeExpansion=true (a YAML sketch follows below)
- PVC uses the storageclass and submits a claim
- Resize: PVC specifies a new, increased size
- Underlying PV is resized
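A minimal sketch of that flow (note the actual StorageClass field is spelled allowVolumeExpansion; the names, the Heketi endpoint, and sizes are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc                         # hypothetical name
provisioner: kubernetes.io/glusterfs
allowVolumeExpansion: true                    # enables online PVC expansion
parameters:
  resturl: "http://heketi.example.com:8081"   # illustrative Heketi endpoint
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  storageClassName: expandable-sc
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 10Gi                           # later raise this value to trigger the resize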
Storage
Feature(s): CNS GlusterFS PV consumption metrics available from OCP

Description: CNS GlusterFS extended to provide PV volume metrics (including consumption) through Prometheus or query

How it Works:
● Metrics available from the PVC endpoint
● User can now know the PV size allocated as well as consumed, and use resize (expand) of the PV if needed from OCP (a PromQL sketch follows below)
● Example metrics added:
  ○ kubelet_volume_stats_capacity_bytes
  ○ kubelet_volume_stats_inodes
  ○ kubelet_volume_stats_inodes_free
  ○ kubelet_volume_stats_inodes_used
  ○ kubelet_volume_stats_used_bytes
  ○ ...etc.

Prometheus ‘curl’ example:

# TYPE kubelet_volume_stats_available_bytes gauge
kubelet_volume_stats_available_bytes{namespace="default",persistentvolumeclaim="claim1"} 8.543010816e+09
# TYPE kubelet_volume_stats_capacity_bytes gauge
kubelet_volume_stats_capacity_bytes{namespace="default",persistentvolumeclaim="claim1"} 8.57735168e+09
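With these metrics scraped, a PromQL sketch for percentage used per claim (the query itself is only an illustration built from the metric names above):

100 * kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes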
Storage
Feature(s): CNS now supports Custom Volume Naming at the backend

Description: OCP users can specify custom volume names (prefixes) for PVs from a CNS-backed storage class.

How it Works:
● Previously PV names were vol_<UUID> (e.g. vol_1213456)
● Specify the new attribute in the CNS storage class called 'volumenameprefix'
● CNS backend volumes will be named myPrefix_NameSpace_PVCClaimName_UUID
● Easy to recognize; users follow a naming convention
● Easy to search & apply policy based on prefix, namespace, project name, or claim name
● Demo Video

Example:

[root@localhost cluster]# cat ../demo/glusterfs-storageclass_fast.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  volumenameprefix: "dept-dev"

Resulting PV name: dept-dev_storageproject_claim1_12312321
(user-supplied prefix _ namespace/project name _ claim name _ UUID)
Storage

Feature(s): Automated Container Native Storage (CNS) deployment with OCP Advanced Installation

Description: In the OCP Advanced Installer:
● Fixed CNS Block Provisioner deployment
● Added CNS uninstall playbook

How it Works:
● CNS storage device details are added to the installer’s inventory file
● The advanced installer manages configuration and deployment of CNS, file & block provisioners, the registry, and ready-to-use PVs

o OCP + CNS deployed as one cluster
o CNS with Block & File provisioners deployed
o OCP Registry deployed on CNS
o Ready to deploy Logging, Metrics on CNS

[Diagram: a master plus OpenShift nodes 1-4, each running app containers alongside RHGS containers]
Logging (Tech Preview)

Feature(s): syslog output plugin for fluentd
Note: the blocker bug fix will be delivered in 3.9.z, so GA will happen in conjunction with that

Description: Users would like to send logs (system and container) from OCP nodes to external endpoints using the syslog protocol. The fluentd syslog output plugin supports that.

Limitations: logs sent via syslog are not encrypted and are therefore insecure

How it Works (OpenShift Ansible Installer for Logging):

openshift_logging_fluentd_remote_syslog = true
openshift_logging_fluentd_remote_syslog_host = <hostname> or <IP>
openshift_logging_fluentd_remote_syslog_port = <port no, defaults to 514>
openshift_logging_fluentd_remote_syslog_severity = <severity level, defaults to debug>
Metrics (Tech Preview)

Feature(s):
● Prometheus stays in Tech Preview
● Prometheus, AlertManager, and AlertBuffer versions are updated
● node_exporter included
● Note: Hawkular is still the supported metrics stack

Description:
OpenShift operators deploy Prometheus on an OCP cluster, collect Kubernetes and infrastructure metrics, and get alerts. Operators can see and query metrics and alerts on the Prometheus web dashboard, or they can bring their own Grafana and hook it up to Prometheus.

How it Works:
● New OpenShift installer playbook for installing the Prometheus server, alert manager, and oAuth proxy
● Deploys a StatefulSet comprising the server, alert-manager, buffer, and oAuth proxy in front, plus PVCs (one for the server and one for the alert manager)
● Alerts can be created in a rule file and selected via the inventory file (a generic rule sketch follows below)
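As a sketch of “alerts can be created in a rule file”, a generic Prometheus alerting rule in the upstream rule-file format; the alert name, expression, and threshold are hypothetical, and how the file is wired in depends on the installer's inventory variables:

groups:
- name: example.rules
  rules:
  - alert: NodeExporterDown                 # hypothetical alert
    expr: up{job="node-exporter"} == 0      # illustrative expression
    for: 5m
    labels:
      severity: warning
    annotations:
      description: "A node-exporter target has been unreachable for 5 minutes."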
CFME 4.6 Container Mgmt
● OpenShift Template Provisioning
● Off-line OpenScap Scans
● Alert Management (Prometheus) - Tech Preview
● Reporting Updates
● Provider Updates
● Chargeback Enhancements
● UX Enhancements
Trusted Container OS
Containers Depend on Linux

CONTAINER CONTAINER CONTAINER CONTAINER CONTAINER

OCI CONTAINER RUNTIME & PACKAGING

ATOMIC HOST /
RED HAT ENTERPRISE LINUX
RHEL 7.5 Highlights

OpenShift Container Platform 3.9 is supported on RHEL 7.3, 7.4, 7.5 and Atomic Host 7.4.5+.

Containers / Atomic
● Docker 1.13
● docker-latest deprecation
● RPM-OSTree package overrides

Security
● Unprivileged mount namespaces
● KASLR full support and enabled by default
● Ansible remediation for OpenSCAP
● Improved SELinux labeling for cgroups (cgroup_seclabel)

Storage
● Virtual Data Optimizer (VDO) for dm-level dedupe and compression
● OverlayFS by default for new installs (overlay2)
  ○ Ensure ftype=1 for 7.3 and earlier
● Devicemapper continues to be supported and available for edge cases around POSIX
● LVM snapshots integrated with the boot loader (boom)
CRI-O v1.9 (Tech Preview)

Feature(s): CRI-O v1.9 - will GA in OpenShift 3.9.z

Description: CRI-O is an OCI-compliant implementation of the Kubernetes Container Runtime Interface. By design it provides only the runtime capabilities needed by the kubelet. CRI-O is designed to be part of Kubernetes and evolve in lock-step with the platform.

Improvements include:
● New CLI (podman) shipping in 7.5.z (a quick sketch follows below)
● Image volume handling
● Registry listings
● Pids cgroups controls
● SELinux support

CRI-O brings:
● A minimal and secure architecture
● Excellent scale and performance
● Ability to run any OCI / Docker image
● Familiar operational tooling and commands

[Diagram: the kubelet talks to CRI-O, which builds on CNI networking, runc, storage, and image management]
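A quick sketch of the podman CLI mentioned above, which mirrors familiar docker commands (the image is illustrative):

podman pull registry.access.redhat.com/rhel7/rhel   # pull an image without a daemon
podman run --rm -it registry.access.redhat.com/rhel7/rhel /bin/bash
podman ps -a                                        # list containers
podman images                                       # list local images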
Buildah

Feature: Buildah moving to full support with RHEL 7.5

Description: Buildah is a daemon-less tool for building and modifying OCI / Docker images.
● Preserves the existing Dockerfile workflow and instructions
● Allows fine-grain control over image layers, the content, and commits
● Utilities on the container host can optionally be called for the build
● Shares the underlying image and storage components with CRI-O

Workflow (a minimal CLI sketch follows below):
1. Start from an existing image or from scratch
2. Generate new layers and/or run commands on existing layers
3. Commit storage and generate the image manifest
4. Deliver the image to a local store or a remote OCI / Docker registry
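A minimal sketch of that workflow with the buildah CLI (the base image, package, and target registry are illustrative):

ctr=$(buildah from registry.access.redhat.com/rhel7/rhel)   # start from an existing image
buildah run "$ctr" -- yum install -y httpd                  # generate a new layer by running a command
buildah config --port 80 --cmd "/usr/sbin/httpd -DFOREGROUND" "$ctr"
buildah commit "$ctr" my-httpd                              # commit storage and generate the manifest
buildah push my-httpd docker://registry.example.com/my-httpd:latest   # deliver to a remote registry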
