
M1

NCP Bootcamp 2019


David Espejo
GTM Partner Systems Engineer, NOLA

1
M1

Module 1 – Introducing the


Nutanix Enterprise Cloud
Nutanix ECA 5.5

2
Agenda M1
1. Introducing the Nutanix Enterprise 8. AHV Workload Migration
Cloud
9. Acropolis Services
2. Managing a Nutanix Cluster
10. Data Resiliency
3. Securing a Nutanix Cluster
11. Data Protection
4. Networking
12. Prism Central
5. VM Creation and Management
13. Cluster Maintenance
6. Health Monitoring and Alerts
14. Life Cycle Operations
7. Distributed Storage Fabric

3
M1

Enterprise Cloud Concepts

6
What is the Nutanix Enterprise Cloud? M1
A full set of infrastructure services delivered through a single OS (AOS)

• Infrastructure Services (File, Block, Compute)
• Integrated Data Services (Data Protection, High Availability)
• Platform Services (Databases, Big Data, Analytics, Messaging)

Available as software-only or integrated with a variety of hardware
platforms for a true hyperconverged solution
7
What is Hyper-convergence? M1
Hyper-convergence streamlines
deployment, management and scaling
of datacenter resources by combining:
• x86-based storage and compute resources
• Software-based system intelligence
• Virtualization
• Management
• API-based automation and rich analytics
...into a single, scalable solution

Separate infrastructure components are brought together in a unified, simplified solution
8
Nodes, Blocks and Clusters M1

• A node consists of a single physical host and a Controller VM (CVM)
• A block consists of one to four nodes mounted in a chassis, sharing power,
backplane and cooling
• A cluster consists of multiple nodes (typically in blocks) converged to deliver
a unified pool of infrastructure resources
10
Node and Block Example M1
Node

Block

11
Cluster Example M1
• Joining multiple nodes in a cluster allows for the pooling of resources
o A Nutanix Enterprise Cloud consists of a cluster containing 3 or more nodes
• Storage is presented as a single pool via the Controller VM (CVM)
[Diagram: Nodes 1 through N, each running a hypervisor with a Controller VM and
guest VMs; each CVM owns the node's SCSI controller and presents the pooled storage]

12
Creating a Cluster with Different Products M1
Nodes in a cluster can be different models or processor generations
• Always check the Compatibility Matrix
• G4 and G5 nodes can be in the same cluster, but not in the same block
• Hardware from different vendors cannot be in the same cluster
• Nodes with Self-Encrypting Drives (SED) and standard HDD/SSD can be in the
same cluster

Support
Portal/Documentation 13
Software-Only Features – Acropolis M1
Acropolis Software Editions
STARTER – Core set of software functionality
PRO – Rich data services, resilience and management features
ULTIMATE – The full suite of Nutanix software capabilities to tackle complex
infrastructure challenges

Enterprise Storage
• Pro (Starter +): Acropolis Block Services, Compression, Deduplication, Erasure Coding
• Ultimate (Pro +): Acropolis File Services, VM Flash Mode

Infrastructure Resilience
• +1 Redundancy Factor
• Availability Domains

Data Protection
• Pro (Starter +): Self Service Restore, Cloud Connect
• Ultimate (Pro +): Multiple Site DR, Metro Availability, DR with NearSync/Sync Replication

Security
• Pro (Starter +): Cluster Lockdown
• Ultimate (Pro +): Data-at-Rest Encryption
15
Software-Only Features – Prism M1
Prism Software Editions
STARTER – Comprehensive single and multi-site systems management of Nutanix clusters
PRO – VM operations & systems management with advanced machine intelligence,
operations & automation

Cluster Management
• Pro (Starter +): One-Click Centralized Upgrades

One-Click Planning
• Pro (Starter +): Capacity Behavior Trends, Just in Time Forecast, VM Right Sizing

One-Click Performance Monitoring
• Pro (Starter +): Bottleneck Detection, Anomaly Detection

One-Click Operational Insights
• Pro (Starter +): Prism Advanced Search, Customizable Dashboard, Scheduled Reporting
16
M1

Storage Concepts

17
Enterprise Cloud Storage Components M1
Storage in a Nutanix Enterprise Cloud is represented by

both physical and logical components


• Storage Pool – Physical component consisting (by default) of all SSDs and
HDDs in a cluster
• Storage Container – Logical component carved from a Storage Pool, designed to
hold vDisks
• vDisk – Logical component carved from a Storage Container that provides
storage to a VM
18
Storage Pool M1
• Physical representation of all storage from nodes in the cluster
• Single pool created by default (multiple pools not recommended)
o Storage can only belong to one pool at a time
• Appears like a storage array to hypervisors
AHV AHV AHV AHV
vDisk vDisk vDisk vDisk

ISO Container VM Container

Storage Pool

Node Node Node Node 19


Storage Container M1
• Logical division of a storage pool’s resources
• Thin provisioned
• Can be configured with compression, deduplication, redundancy
factor and so on
• Appears like a storage array to hypervisors
AHV AHV AHV AHV
vDisk vDisk vDisk vDisk

ISO Container VM Container

Storage Pool

Node Node Node Node 20


vDisk M1
• Virtual disks or swap files used by virtual machines (any file larger than 512 KB)
• Carved from a Storage Container
• Composed of extents grouped and stored on disk as an extent group
AHV AHV AHV AHV
vDisk vDisk vDisk vDisk

ISO Container VM Container

Storage Pool

Node Node Node Node 21


M1

Acropolis Overview

22
Acropolis Components M1

• Software stack consists of:
  • Acropolis Hypervisor (AHV)
  • Distributed Storage Fabric (DSF)
  • App Mobility Fabric (AMF)

• App Mobility Fabric – Intelligent Application Placement and Migration
• Distributed Storage Fabric – High-Performance Storage Services for All Applications
• Acropolis Hypervisor – Comprehensive, Enterprise-class Virtualization

24
I/O Path M1
• Oplog
• Unified cache
• Extent store
• Read
• Write
• Replication

[Diagram: DSF I/O path — random write I/O lands in the OpLog (SSD) and drains to
the Extent Store (SSD/HDD); sequential write I/O goes to the Extent Store directly;
read I/O is served from the Unified Cache (memory/SSD) or the Extent Store;
extensible to cloud, NAS, etc.]

33
Write I/Os M1
Write I/O destination order:
1. Local SSD (local SSDs not full)
2. Remote SSD (local SSDs full)
3. Local HDD (data migration)
4. Remote HDD (data migration)

Note: Sequential I/O can bypass SSD and write directly to HDD
34
Data Locality and Live Migration M1

• Utilization Monitoring
• Live Migration
• Post Migration
User VMs User VMs User VMs

VM I/O VM I/O VM I/O


AHV AHV … AHV

SCSI Controller CVM SCSI Controller CVM SCSI Controller CVM

DSF

35
M1

Resources

37
Support Portal – Documentation M1

41
Support Portal – Knowledgebase M1

42
M1

New Features in 5.5

49
AHV Turbo M1

• IO path optimization
• Remote Direct Memory Access (RDMA)
• New IO subsystem designed to deliver increased performance

[Diagram: Guest → Host → HW]

50
NearSync Replication M1
• New replication method in addition
to asynchronous and synchronous
o Provides better protection for
mission-critical applications
• Able to achieve 1-minute RPO
times
• VM/Application level retention and
restores
• No latency/distance restrictions
51
NearSync Replication – Light-weight
Snapshots M1
• Continuous Data Protection-like technology
• LWS Store located on SSD
• Snapshots are replicated to remote system
• User can restore from any available LWS, no matter
how minute
• Continue using Protection Domain-based workflows
o Introduce time-based retention policies
o Automatically creates multiple (roll-up) schedules
52
Xtract M1
• Automates the MANY steps required to manually migrate or
rebuild VMs and applications to new infrastructure
o Xtract for VMs – Automates “lift and shift” VM migrations
o Xtract for DBs – Migrates full DB instances at the application level

Typical process required for manual replace or rebuild 53


Xtract for Migration M1
• Migrations with one-click
simplicity
• Near-zero application or VM
service outage
o Full-cutover control
• Enables migration testing
• Included with ALL Nutanix
software editions

54
M1

Cluster Components

55
Cluster Components M1

• Data management and interface to the hypervisor (via NFS, iSCSI or SMB)
• Administration console / API endpoint
• Distributed metadata store
• Cluster configuration
• Cluster management and maintenance (MapReduce)
56
M1

Module 2 - Managing
a Nutanix Cluster
Nutanix ECA 5.5

58
Management Interfaces - Overview M1
Several methods to manage a Nutanix implementation:
• Graphical UI – Prism Element and Prism Central
o Preferred method for management
o Manage entire environment (when using Prism Central)
• Command Line Interfaces
o nCLI – Get status and configure entities within a cluster
o ACLI – Manage the Acropolis portion of the Nutanix environment
• Nutanix PowerShell Cmdlets – For use with Windows PowerShell
• REST API – Exposes all GUI components for orchestration and
automation
59
M1

Prism Element- Initial


Configuration

60
Accessing Prism Element M1
• First-time Login:
• Point a browser to http://management_ip_addr
o Browser will redirect to port 9440
o An SSL warning may appear
o If a Welcome screen displays, accept the terms

• Enter Default Credentials:


• Username = admin
• Password is AOS version-dependent
• Change Password:
• Follow password requirements
• Complexity increased starting with AOS 5.1 61
Initial Configuration – Accepting the
EULA
M1
On first login (or if EULA
changes) a license agreement is
displayed
• Read, fill out, and acknowledge
the EULA
• This includes the terms and
conditions to which you agree by
using Prism
Agreement applies to all

Nutanix software and


documentation
62
Initial Configuration – Enabling Pulse M1
Pulse monitors cluster health and
proactively notifies customer
support if a problem is detected
• Automatically and unobtrusively
collects cluster data with no
performance impact
• Diagnostic data sent to support once
per day, per node
• Sent via e-mail to both support and user
• Proactive monitoring: different from alerts
(Displays on first login or after an upgrade)

• Enabled by default 63
Initial Configuration – Pulse Settings M1
Pulse is configured by selecting it
from the Gear
icon:
• Enable/Disable
• Set Email Recipients
• Configure Verbosity
o Basic: Collect basic statistics only
o Basic with Coredump: Collect basic
statistics plus core dump data

64
What gets Shared by Pulse? M1
Information collected and shared includes:
• System alerts
• Current Nutanix software version
• Nutanix processes and Controller VM info
• Hypervisor details such as type and version
• System-level statistics
• Configuration information

Information NOT shared with Nutanix Support:
• Guest VMs
• User data
• Metadata
• Administrator credentials
• Identification data
• Private information
65
M1

Command Line Interfaces

66
Command Line Interfaces – Overview M1
Run system administration commands against a Nutanix cluster
from:
• A local machine
• Any CVM in the cluster
Two CLIs:
• nCLI – Get status and configure entities within a cluster
• aCLI – Manage hosts, networks, snapshots and VMs (the Acropolis portion of
the Nutanix environment)
Acropolis 5.5 Command Reference Guide
• Contains nCLI, aCLI and CVM commands
67
Command Line Interfaces - nCLI M1
• Download the nCLI installer to a local machine
o Preferred method, nCLI commands can also be run from a CVM
• Open a command prompt (bash or CMD)
• Enter:
ncli -s management_ip_addr -u 'username' -p 'user_password'

68
nCLI Entities and Parameters M1

• Each entity has unique actions, but a common action


for all entities is list:
ncli> storagepool list

• Some actions require parameters:


ncli> datastore create name="NTNX-NFS" ctr-name="nfs-ctr"

• Parameter-value pairs can be listed in any order


• String values should be surrounded by quotes
o Critical when specifying a list of values
69
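A few more nCLI invocations, shown as a hedged sketch (entity and action names
below are illustrative of common usage; run "ncli help" on your AOS version to
confirm the exact syntax):

ncli> cluster info     # show cluster-wide details
ncli> host list        # list the nodes in the cluster
ncli> ctr list         # list storage containers ("ctr" is the container entity)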
nCLI Example M1

Using nCLI to get cluster information


70
Command Line Interfaces - aCLI M1
• SSH to a CVM in
the cluster and
type acli
• Type exit to
return to CVM
shell

Using aCLI to place an AHV host into


maintenance mode

71
aCLI Examples M1

• Create a new virtual network for VMs:


net.create vlan.100 ip_config=10.1.1.1/24

• Add a DHCP pool to a managed network:


net.add_dhcp_pool vlan.100 start=10.1.1.100 end=10.1.1.200

• Clone a VM:
vm.clone testClone clone_from_vm=Edu01VM

72
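A hedged sketch of a few additional aCLI commands for listing entities and
controlling VM power state (typed inside the acli shell; the trailing comments
are annotations, and "acli help" lists the full command set for your AOS version):

vm.list               # list all VMs managed by Acropolis
net.list              # list virtual networks
host.list             # list AHV hosts in the cluster
vm.on Edu01VM         # power on a VM
vm.shutdown Edu01VM   # ACPI (guest) shutdown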
M1

PowerShell Cmdlets

73
PowerShell Cmdlets – Overview M1
Nutanix provides a set of PowerShell Cmdlets to
perform system administration tasks using
PowerShell
• Install PowerShell v2 and .NET framework 4
• Download the installer from Prism Element
• A shortcut, NutanixCmdlets, is created
• Launching the shortcut opens a PowerShell window
with the Cmdlets loaded
• Automates common operational tasks on Nutanix
clusters
• API Reference available on Support Portal
74
M1

REST API

75
REST API – Overview M1

• Allows an external system to interrogate a cluster


using a script that makes REST API calls
• Uses HTTP requests (Get, Post, Put, and Delete) to
retrieve info or make changes to the cluster
• Responses are coded in JSON format
• Prism Element includes a REST API Explorer
oDisplays a list of cluster objects that can be managed
by the API
oSample API calls can be made to see output
76
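As a rough sketch of what v2.0 calls look like from a script (the cluster address,
credentials and JSON field names are illustrative placeholders; use the REST API
Explorer to confirm the exact endpoints and schemas exposed by your AOS version):

$ curl -k -u admin:'password' \
    "https://management_ip_addr:9440/PrismGateway/services/rest/v2.0/vms/"   # GET: list VMs (JSON response)

$ curl -k -u admin:'password' -X POST -H "Content-Type: application/json" \
    -d '{"name": "vlan.100", "vlan_id": 100}' \
    "https://management_ip_addr:9440/PrismGateway/services/rest/v2.0/networks/"   # POST: create a network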
REST API Explorer - Accessing M1
1. Connect to Nutanix API v2 by default
o Three versions, 1, 2 and 2.5
2. Click on any of the entities in the left-
hand column
3. List of operations available for an
entity

77
Sample API Call – Get VMs M1
1. Select an entity and operation
2. Provide parameters
3. Click Try it out!
4. View response

78
Sample API Call – Create a Network M1

Object has been created


79
Exercise: M1
• Knowledge Check

80
M1

Module 3 – Securing a
Nutanix Cluster
Nutanix ECA 5.5

81
Security Features M1
• Two-factor authentication
• Cluster lockdown
• Key management and
administration
• STIG Implementation
• Data at Rest Encryption

82
Two Factor Authentication M1
• Logons require a combination of a client
certificate and username and password

• Administrators can use local accounts or use


AD

• One-way: Authenticate to the server

• Two-way: Server also authenticates the


client

• Two-factor: Username/Password AND valid


certificate 83
Cluster Lockdown M1
• Administrators can restrict access to a Nutanix
cluster

• SSH sessions can be restricted through non-


repudiated keys
o Each node employs a public/private key-pair
o Cluster secured by distributing these keys

• Remote Login with password can be disabled

• SSH access can be completely locked down by


Configuring Cluster
disabling remote login and deleting all keys Lockdown in Prism Element

84
Key Management and Administration M1
• A key management server is used to authenticate

Nutanix nodes

• SEDs generate new encryption keys which are uploaded

to the KMS

• In the event of power failure or a reboot, keys are

retrieved from the KMS and used to unlock the SEDs

• Security keys can be instantly reprogrammed

• Crypto Erase can be used to instantly erase all data on


an SED while generating a new key
85
STIG Implementation M1
• Nutanix includes five custom STIGs installed with AOS
o AHV
o AOS
o Prism Web Server
o Prism Reverse Proxy
o JRE
• Provided in machine-readable XCCDF format
• Implemented via nCLI commands

86
STIG SCMA Monitoring M1
Security Configuration Management
Automation (SCMA)
• Monitors over 800 security entities covering
storage, virtualization and management

• Detects unknown or unauthorized changes and


can self-heal to maintain compliance

• SCMA output/actions are logged to syslog

Note: Detailed information and hands-on labs are part of the Advanced
Administration and Performance Monitoring course
87
Data at Rest Encryption (DARE) M1
Secures data while at rest using self-
encrypting drives and key-based access
management
• Data is encrypted on all drives at all times
• Data inaccessible in the event of drive or node
theft
• Data on a drive can be securely destroyed
• Key authorization enables password rotation at
arbitrary times
• Protection can be enabled or disabled at any time
• No performance penalty is incurred despite
encrypting all data 88
DARE Implementation M1
1. Install SEDs for all data drives in a cluster
o Drives are FIPS 140-2 Level 2 validated and use validated cryptographic modules

2. Controller VM provides a valid key to access data on a SED


3. Keys are stored in a KMS, CVM communicates with the KMS using the Key
Management Interoperability Protocol (KMIP) to upload and retrieve keys
o KMS should be deployed in clustered mode for resiliency and to ensure access to
data

4. If a node experiences a power failure, the CVM retrieves keys from the KMS
once it is back online
o If a drive is re-seated, it becomes locked
o If a drive is stolen, the data is inaccessible
o If a node is stolen, the KMS can revoke the node certificates to prevent data access
89
Configuring DARE in Prism Element M1
• DARE Configuration settings can
be accessed from the Gear menu
• If DARE is not on the menu, the
nodes don’t contain SIDs

Note: The Advanced


Administration and Performance
Monitoring course covers this
90
topic in detail
DARE Configuration (Native KMS) M1
Restrictions:
• Minimum of 3 nodes
• On AHV: encryption is applied to the entire cluster
• On ESXi/Hyper-V: encryption is applied per container
• Requires an Ultimate license, or Pro with the Add-On
• The feature cannot be disabled afterwards

Workflow:
1. Select the cluster's local KMS
2. AOS applies a DEK to all data read/written
3. A KEK is generated to encrypt the DEK
4. Encryption uses Intel AES-NI to reduce overhead
5. The KEK can be regenerated periodically
91
DARE Configuration (Software Only+
External KMS)
M1
Restrictions:
• Minimum of 3 nodes
• On AHV: encryption is applied to the entire cluster
• On ESXi/Hyper-V: encryption is applied per container
• Requires an Ultimate license, or Pro with the Add-On
• The feature cannot be disabled afterwards

Workflow:
1. Obtain the root CA certificate and upload it
2. Generate CSRs
3. Upload the CA-signed certificates for each node
4. Add the KMS
92
Use Case: App Segmentation
M1
Environment = Prod Environment = Dev

Category: Web Category: Shared Apps Category: Web Category: Shared Apps
IN: Any HTTP/S IN: App, DB IN: Any HTTP/S IN: App, DB
OUT: App OUT: None OUT: App OUT: None

Apache, NGINX AD, DNS Apache, NGINX AD, DNS

Category: App Category: DB Category: App Category: DB

IN: Web IN: App IN: Web IN: App


OUT: DB OUT: Shared Apps OUT: DB OUT: Shared Apps
OUT: Shared Apps OUT: Shared Apps

Django, Rails mysql, MS SQL Django, Rails mysql, MS SQL

93
Security Policies (Prism Central + Flow)
M1

94
Exercise: M1
• Knowledge Check

95
M1

Configuring Authentication

96
Configuring Authentication – Overview M1
• Adding a Directory List
• Enabling Client Authentication
• Regenerating a Self-Signed Certificate
• Importing a Certificate

97
Adding a Directory List M1

To add an authentication
directory:
1. Click the Directory List
tab
2. Click the New Directory
button
3. Select a Directory Type
from the drop-down
menu
4. Enter required details (Active Directory or OpenLDAP settings)
5. Click Save
98
Enabling Client Authentication M1
Effectively enables two-factor
authentication, by ensuring that
the user provides a valid
certificate
To enable client authentication:
1. Click the Client tab
2. Select the Configure Client
Chain Certificate check box
3. Click Choose File
4. Select a Client Chain Certificate Client Authentication Configuration
file for upload
5. Click Open 99
Role Mapping M1

100
M1

Module 4 – Networking
Nutanix ECA 5.5

101
M1

Networking Overview

102
Providing Network Connectivity to VMs M1
• Two methods:

• Managed – AHV uses IP Address Management to

give out IP addresses to VMs via DHCP pools

• Unmanaged – VMs directly connect to a VLAN

VMs connect to a virtual network before they


connect to a physical network

• Virtual networks created and managed using:


• Prism Element
• Acropolis CLI (aCLI )
• REST (API)
103
IP Address Management (IPAM) M1
How does IPAM work?
• VM NIC is added to IPAM-enabled
network
• VM requests IP address
• VXLAN encapsulates request
• AOS DHCP server assigns IP address
• IPAM associates NIC MAC address with
IP address, locks address
• IP address assigned to the VM is
persistent until the vNIC is deleted or
the VM is destroyed
104
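Tying this to the CLI: a managed (IPAM-enabled) network is simply one created with
an address configuration and a DHCP pool, mirroring the aCLI examples from Module 2
(the values below are illustrative):

net.create vlan.100 ip_config=10.1.1.1/24                     # managed network: AOS serves DHCP for it
net.add_dhcp_pool vlan.100 start=10.1.1.100 end=10.1.1.200    # addresses handed out and locked per vNIC MAC
net.create vlan.200                                           # unmanaged network: VMs attach directly to the VLAN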
Network Segmentation M1

Network segmentation is designed to separate management traffic from
backplane (storage and CVM) traffic
• Separates storage traffic from routable management
traffic for security purposes
• Separate virtual networks are created for each traffic type
• Several methods:
o Segment a network on an existing cluster
o Segment a network on an existing RDMA cluster
o Segment a network during cluster expansion
105
Network Segmentation Process M1
• Network for backplane traffic created on the existing default virtual switch

• eth2 interfaces and host interfaces placed on a newly created network


o RDMA also adds rdma0 interface to this network
• During a cluster expansion the existing segmentation configuration is
automatically applied

106
Creating the Backplane Network M1
To create the backplane
network, first click Configure for
eth2 in the Network
Configuration window, then:
1. Provide a non-routable subnet
2. Specify the netmask
3. Provide a VLAN ID if assigning the interface to a VLAN
4. Click Verify and Save

Note: Successful validation restarts the cluster
107
M1

vSwitch Implementation

108
vSwitch Implementation (AHV)
Overview
M1

virbr0 (Linux bridge)
• Management communication between the CVM and the AHV host only
• Behaves as a virtual standard vSwitch

br0 (OVS bridge)
• Storage, host, and VM network traffic
• Behaves as a virtual distributed vSwitch

[Diagram: the CVM attaches to virbr0 (eth1/vnet1) and to br0 (eth0/vnet0); user VMs
attach to br0 via tap ports; br0 uplinks through bond0 to Physical Switch 1 and 2]

109
vSwitch Implementation (AHV) – Ports
and Bonds
M4
M1

• CVMs are connected to both the standard vSwitch (vnet1 @ 192.168.5.2) and the
distributed vSwitch (vnet0 @ external_IP_address)
• User VMs are connected to the distributed vSwitch through "tapN" virtual NICs
• Tap ports act as bridge connections for virtual NICs presented to virtual machines

[Diagram: CVM and user VM interfaces mapped onto the Linux bridge (virbr0) and the
OVS bridge (br0), with bond0 uplinking to Physical Switch 1 and 2]

110
Open vSwitch (OVS) M1
• Open-source software switch designed for a multiserver virtualization environment
• Used by AHV to connect CVM, hypervisor, and VMs to each other and physical network
• OVS package installed by default on each AHV node
• OVS services automatically started when node is started
• Behaves like a Layer 2 learning switch maintaining a MAC address learning table

• Hypervisor (host) and VMs connect to virtual ports
• OpenFlow protocol used to configure and communicate with Open vSwitch
• Each hypervisor hosts an OVS instance; all instances combine to form a
single (distributed) switch

[Diagram: on each host, user VMs attach to bridge br0 via tap ports; each OVS
instance uplinks through the host NICs to ports on the physical switch]
111
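A hedged sketch of how one might inspect the OVS configuration from a CVM
(192.168.5.1 is the AHV host's internal address, as used elsewhere in this deck;
the first two are standard Open vSwitch commands, the last is the Nutanix wrapper
already shown in this module):

$ ssh root@192.168.5.1 "ovs-vsctl show"              # bridges, bonds, and ports on the local host
$ ssh root@192.168.5.1 "ovs-appctl bond/show bond0"  # bond members and load-balancing state
$ allssh manage_ovs --bridge_name br0 show_uplinks   # uplink summary for every host in the cluster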
M1

Viewing Network Details

112
Network Best Practices (AHV) M1
• Configure CVM and AHV in same VLAN
• Configure 10GbE ports as physical ports on
open vSwitch br0
• Configure corresponding switch ports to
trunk ports
• Remove 1GbE ports from the open vSwitch
br0
• Connect all nodes to the same Layer 2 network
• Don't make any changes to the Linux bridge
• Don't remove the CVM from open vSwitch br0 and the Linux bridge
• Enable PortFast

The Best Practices Guide covers a wide range of networking scenarios that Nutanix
administrators encounter. If the defaults don't match customer requirements, refer
to the Advanced Networking Guide.
113
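A minimal sketch of removing the 1GbE ports from br0 so that only the 10GbE
interfaces remain in the bond, assuming the manage_ovs syntax current at AOS 5.5
(verify against the AHV networking best practices guide first, since this briefly
reconfigures the uplinks):

$ manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks   # keep only 10GbE NICs in bond0
$ manage_ovs --bridge_name br0 show_uplinks                                        # confirm the new membership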
Bond Modes Overview M1
• Supported vSwitch bond modes VM1 VM2
CVM
Option 1: Active-backup (active-passive)
Option 2: Balance-slb (active-active)
Option 3: Balance-TCP (Transmission Control Protocol)

Refer to the Advanced Networking Guide


for additional information on bonds Sw1 Sw2

114
Load Balancing Modes: Active-Backup M1
Active-passive VM1 VM2
CVM
• Provides only fault tolerance
OVS
• No special hardware required (physical switches
available for redundancy)
oVS A-B
10GbE 10GbE
• CVM and guest VM follow same activity path
• 10GbE (10 Gbps) interface CVM

• Only one NIC actively used for traffic VM1

• No traffic load balancing VM2

Throughput individual
Sw1 Sw2
VMs and OVS: 10 Gbps

115
Load Balancing Modes: Balance-slb M1
Active-active VM1 VM3
CVM

• Traffic load balancing based on source MAC


OVS
address of each guest virtual network interface
(VIF)
oVS Slb
10GbE 10GbE
• Balance-slb benefits from throughput
aggregation and failover, but load balancing can CVM

work well only for a sufficient number of guests


VM1
(or rather VIFs), since traffic from one VIF is
never split between multiple NICs VM3

Note: Both active-backup and balance-slb do not Sw1 Sw2


require configuration on the switch side
116
Load Balancing Modes: Balance-TCP M1
• Balance-TCP (Transmission VM1 VM3
CVM
Control Protocol – TCP 80/443)
OVS
• Preferred load balancing mode for oVS TCP
aggregate throughput 10GbE 10GbE

• All links are active CVM CVM

• Link aggregation (LACP) VM1 VM1

VM3 VM3

Sw1 Sw2

117
Determine uplink bond mode M1

118
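The screenshot on this slide shows the result of querying the bond; as a sketch,
the same information can be pulled from any CVM with a standard OVS command
(br0/bond0 are the default names assumed here):

$ ssh root@192.168.5.1 "ovs-appctl bond/show bond0"   # reports bond_mode (active-backup, balance-slb or
                                                      # balance-tcp), LACP status, and the active member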
M1

AHV Host Networking

119
AHV Host Networking – Default
Configuration
M1
$ allssh manage_ovs --bridge_name br0 show_uplinks

CVM Guest VMs

AHV
Open vSwitch Bridge 0

Bond 0

• Default hypervisor configuration: 2x 1GbE and 2x 10GbE in same Eth2 Eth0 Eth3 Eth1
bridge/bond of Open vSwitch (OVS) 10GbE 1GbE 10GbE 1GbE
• eth0, eth1, eth2, and eth3 of each hypervisor bonded to bond0 Physical Switch 1 Physical Switch 2
• CVM traffic over either 1GbE or 10GbE interface
Note: S1 and S2 are redundant physical switches w/1GbE and 10GbE
adapters
120
Node Interfaces – CVM$ ifconfig M1

CVM external IP address

External virtual IP address

CVM internal IP address


192.168.5.2

CVM internal IP address


192.168.5.254 121
AHV Host Networking: Add Bridge br1 M1
$ allssh ssh root@192.168.5.1 ovs-vsctl add-br br1

CVM Guest VMs

AHV

Bridge Bridge

Bond 0 Bond 1

Recommended configuration
Eth2 Eth0 Eth3 Eth1
• 1GbE and 10GbE links in separate bridges/bonds to 10GbE 1GbE 10GbE 1GbE
ascertain that CVM traffic always uses 10GbE interface Physical Switch 1 Physical Switch 2
• Guest VMs can be connected to either br0 or br1
– For example, Production vs. test or backup
122
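A hedged sketch of finishing the recommended layout after creating br1: move the
1GbE interfaces into a new bond on br1 and attach a guest network to it (the
interface group, bridge and VLAN values are illustrative; confirm the manage_ovs
and acli parameters for your AOS version):

$ manage_ovs --bridge_name br1 --bond_name bond1 --interfaces 1g update_uplinks   # 1GbE NICs uplink br1
$ acli net.create br1_vlan99 vswitch_name=br1 vlan=99                             # guest network on br1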
AHV Network Dashboard M1

123
Exercises: M1
• Creating an Unmanaged
Network
• Creating a Managed
Network
• Managing an Open
vSwitch (OVS)
• Creating a new OVS

124
Exercise: M1
• Knowledge Check

http://bit.ly/ncptest01

Questions 1 - 28

125
M1

Module 5 – VM Creation &


Management
Nutanix ECA 5.5

126
M1

Image Service

127
Image Service – Overview M1

The Image Service feature imports disk images, ISOs, or any supported image
directly into Virtualization Management:
• Create/Delete images
• Update existing images
• List created images
• Get metadata info from an existing image

Supports ISOs, disk images, or any image supported by ESXi or Hyper-V
128
Image Service – Supported Disk
Formats
M1

Supported disk formats:


• raw
• vhd (virtual hard disk)
• vmdk (virtual machine disk)
• vdi (Oracle VirtualBox)
• iso (disc image)
• qcow2 (QEMU copy on write)

129
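A hedged sketch of importing images from the aCLI instead of Prism (the URLs,
container name and image_type values are illustrative; "acli image.create" help
lists the accepted parameters on your AOS version):

image.create CentOS7-ISO source_url=http://fileserver/isos/centos7.iso container=ISO image_type=kIsoImage
image.create Win2016-Disk source_url=nfs://fileserver/images/win2016.qcow2 container=ISO image_type=kDiskImage
image.list        # confirm the imported images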
M1

Supported Guest VM Types


for AHV

130
Supported Guest VM Types for AHV M1
OS Types with SCSI Bus Types:
• Windows 7, 8.x, 10
• Windows Server 2008/R2, 2012/R2, 2016
• RHEL 6.4-6.9, 7.0-7.4
• CentOS 6.4-6.8, 7.0-7.3
• Ubuntu 12.04.5, 14.04.x, 16.04.x, 16.10
• FreeBSD 9.3, 10.0-10.3, 11
• SLES 11 SP3/SP4, 12
• Oracle Linux 6.x, 7.x

OS Types with PCI Bus Types:
• RHEL 5.10, 5.11, 6.3
• CentOS 5.10, 5.11, 6.3
• Ubuntu 12.04
• SLES 12
131
VM pane in Prism M1

132
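The VM pane shows the Prism workflow; the same VM can be built from the aCLI,
roughly as below (the VM name, sizes and network are illustrative placeholders):

vm.create TestVM num_vcpus=2 memory=4G                # define the VM
vm.disk_create TestVM create_size=50G container=VM    # add a 50 GB vDisk in the "VM" container
vm.nic_create TestVM network=vlan.100                 # attach a vNIC to a virtual network
vm.on TestVM                                          # power it on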
M1

Nutanix Guest Tools (NGT)

133
Nutanix Guest Tools (NGT) Overview M1
NGT is a software bundle that is installed in a guest VM and provides:
• Communication with the CVM (Nutanix Guest Agent)
• File level restore (FLR CLI) capabilities
• Migration between hypervisors (Nutanix VM Mobility Drivers)
• App-consistent snapshots for Linux and Windows VMs (VSS requestor and hardware
provider for Windows VMs; application-consistent snapshot support for Linux VMs)
134
VirtIO Drivers M1
• Improves stability and performance of VMs on AHV
• Windows VMs only
• SCSI Driver
• Memory Balloon Driver
• NIC Driver
135
Acropolis CLI: Shutting Down VMs M1

136
GPU Support M1
GPU Passthrough
• Allows VM to have full access to GPU
• PCI passthrough enforces security and
isolation
• User configures GPU type and number – AHV
Scheduler matches VM GPU requirements at
VM power on time
• Powered-on VMs with a GPU assigned are not live migratable, so the user is
asked whether to power them off for 1-click AHV and BIOS upgrade operations

vGPU
• A slice of a GPU dedicated to a VM
• GPU commands are passed to the GPU after being filtered through nVidia's
Grid driver
• Slightly less performant due to extra driver layer
• Much higher VM density per GPU, up to 16 to 1

137
M1

Module 6 – Health
Monitoring and Alerts
Nutanix ECA 5.5

138
M1

Health Dashboard

139
Overview M1
• The Health dashboard displays dynamically updated health
information about VMs, hosts, and disks in the cluster

[Screenshot callouts: select an entity type, then choose a grouping; a summary
of health checks is displayed; results are displayed in the main pane]
140
Actions M1
• Manage Checks
• Set NCC
Frequency
• Run Checks
• Log Collector

141
Run Checks M1
• The Run Checks menu option is
used to manually run a series of
NCC checks
1. Select which checks to run
2. Determine if you want to send
a results report
3. Click Run
• Useful when running a single
check after resolving a problem

142
M1

Analysis Dashboard

143
Overview M1
• The Analysis dashboard allows you to create charts to display
performance metrics from a number of dynamically monitored
entities
• The Chart Definitions pane lists metric and entity charts that can be run
• The Chart Monitors pane displays details for selected charts based on a given
time interval
• The Alerts and Events pane displays details on any alert or event that occurred
during the interval shown in the chart
144
Creating a Chart M1
• No charts are provided by default
• To create a chart:
1. Click New and select the type of chart
2. Supply a name for the chart
3. Add metrics, entity types and entities
4. Click Save

145
M1

Alerts Dashboard

146
Overview M1
• The Alerts dashboard displays alert or event messages, depending on
the selected viewing mode

Separate viewing modes are available for Alerts (Alerts View) and Events (Events View)

147
Alerts View M1
• The Alerts View displays each alert, including severity level,
time stamp, related entity and whether the alert has been
acknowledged or resolved
Select one or more alerts, then click Acknowledge or Resolve to take action on
the alert. Use the Action buttons to configure Alert policies or set up Alert
Email Configuration.

148
Redirection, Cause and Resolution M1
• Clicking link for an entity redirects to the dashboard where the alert originated
• Clicking Cause or Resolution will display additional information

149
Filtering Alerts M1
Alerts can be filtered by:
• Severity
o All
o Critical
o Warning
o Info
• Resolution
o All
o Unresolved
o Automatically Resolved
o Manually Resolved
• From/To data range 150
M1

Module 7 – Distributed
Storage Fabric (DSF)
Nutanix ECA 5.5

151
M1

Capacity Optimization – Deduplication

152
Deduplication Process M1
• Streams of data are fingerprinted during ingest
  o Uses a SHA-1 hash at 16K granularity
  o Stored persistently as metadata
• Distributed across all nodes for global deduplication across the entire cluster
• 100% software-defined
• Offloads SHA-1 fingerprinting to the CPU for minimal overhead
• Can result in more space within the performance tier, allowing more active VM
data to fit, resulting in better performance

[Diagram: distributed deduplication — unique (U) vs. redundant (R) blocks across
nodes, before and after deduplication]
153
Deduplication Techniques M1
Inline deduplication

• Removes redundant data in performance tier


• Allows more active data, can improve performance to VMs
• Software-driven, leverages hardware-assist capabilities
• Useful for applications with large common working sets

Post-process deduplication

• Reduces redundant data in capacity tier, increasing effective storage


capacity of a cluster
• Global and distributed across all nodes in a cluster
• Useful for virtual desktops (VDI) with full clones

Deduplication works with compression and erasure coding to


optimize capacity efficiency 154
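Both deduplication techniques are enabled per storage container. A loose sketch
only (the exact nCLI parameter names vary across AOS releases; check
"ncli ctr edit help" before using):

ncli> ctr edit name="VM" fingerprint-on-write=on      # enable inline (cache) deduplication
ncli> ctr edit name="VM" on-disk-dedup=post-process   # enable post-process (capacity tier) deduplication
# compression is toggled with similar container-edit parameters (see ncli ctr edit help)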


M1

Capacity Optimization –
Compression

155
Compression Overview M7
M1
Compression is a process that reduces the number of bits used to represent
individual chunks of data
• A data reduction technique that optimizes storage capacity
• Uses an algorithm to determine how to compress data
• Ideal for data that’s written once and read frequently
o File Servers
o Archiving
o Backup
• Can be used in conjunction with Data Deduplication and/or Erasure
coding 156
Compression Process M1
Two compression techniques:
• Inline
o Compresses large or sequential
streams of data as it is written to the
Extent Store
o If data is random, written
uncompressed to the OpLog,
coalesced, then compressed before
being written to the Extent Store
o Uses an extremely fast compression
algorithm (LZ4)
• Post-Process
o Data initially written uncompressed
o Leverages the Curator framework to compress data as a background task
157
Compression Technique Comparison M1
Inline compression

• Data compressed as it’s written


• LZ4, an extremely fast compression algorithm
Post-process (offline) compression

• MapReduce compresses data after “cold” data is migrated to lower-


performance storage
• LZ4HC, a high-compression version of LZ4 algorithm
o Slower than inline
o Compressed files decode faster
• No impact on normal I/O path
• Ideal for random-access batch workloads
158
Workloads Less Suitable for Compression M7
M1
• Applications performing native data compression,
including JPEG or MPEG
• Systems featuring native compression such as SQL server
databases
• Workloads generating heavy random write operations
• Workloads that frequently update data (CAD)
• Data already storage optimized, like VCAI snapshots, linked
clones, and so forth

159
M1

Capacity Optimization –
Erasure Coding

160
Erasure Coding-X (EC-X) Overview M1

Erasure Coding is a method of data protection that


breaks data into blocks, then expands and encodes
the block with redundant data
• Optimizes data storage while still providing the
ability to tolerate multiple failures
• Similar in concept to RAID parity calculation
• Encodes a strip of data blocks on different nodes
and calculates parity based on the configured
replication factor (RF)
161
EC-X Compared to Traditional RAID M1
Traditional RAID:
• Bottlenecked by a single disk
• Slow rebuilds
• Hardware-defined
• Hot spares waste space
• RAID-5, RAID-6, RAID-DP on disks

Erasure coding (EC-X):
• Keeps resiliency unchanged
• Optimizes availability (fast rebuilds)
• Uses resources of the entire cluster
• Increases usable storage capacity
• EC-X across nodes

EC-X increases effective or usable capacity on a cluster
• Savings after enabling EC-X are in addition to deduplication and compression
savings
162
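A rough back-of-the-envelope illustration of why (the 4/1 strip size is an
assumption; the strip actually chosen depends on cluster size): with plain RF2,
every block is stored twice, so overhead is 2.0x and usable capacity is 1/2 = 50%
of raw. With a 4 data + 1 parity EC-X strip, 5 blocks are stored for every 4 blocks
of data, so overhead drops to 5/4 = 1.25x and usable capacity rises to 4/5 = 80% of
raw, while a single failure can still be tolerated.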
EC-X Process M1

• Erasure Coding is performed post-process
• Leverages the Curator MapReduce framework
  o Curator finds eligible extent groups available for coding
  o Must be "write-cold"
  o Distributes coding tasks via Chronos
• Does not affect the traditional write I/O path

[Diagram: RF2 and RF3 data whose primary copies are local and replicas are
distributed throughout the cluster; strips of data blocks plus parity blocks
spread across nodes]
163
EC-X Workloads M1

Ideal:
• Workloads without high I/O requirements; write-once, read-many (WORM) workloads
• Examples: Backups, archives, file servers, log servers, email (depending on use)

Not ideal:
• Write/overwrite-intensive loads that add overhead for software-defined storage
• Example: Write-intensive processes such as VDI
164
Erasure Coding Best Practices M7
M1
• A cluster must have at least four nodes in order for
erasure coding to be enabled
• Do not use erasure coding on datasets with
many overwrites
o Optimal for snapshots, file server archives, backups and
other “cold” data
• Read performance may be degraded during failure
scenarios
• Erasure coding is a backend job; achieving savings
might take time 165
M1

Module 9 – Acropolis Services


Nutanix ECA 5.5

166
M1

Acropolis Block Services (ABS)

167
What is Acropolis Block Services (ABS)? M1

• Native scale-out storage solution
• Enables the cluster to provide block storage or LUNs
• Provides access via iSCSI
• Presents LUNs as vDisks

[Diagram: a Volume Group containing Volumes 1–3, consumed by VMs]

168
ABS Use Cases M1
• Shared Disks (Oracle RAC, Microsoft Failover
Clustering)
• Disks as first-class entities—execution contexts are
ephemeral and data is critical
• Guest-initiated iSCSI supports bare-metal consumers
and Microsoft Exchange on vSphere

169
What is Acropolis File Services (AFS)? M1
• Lets users leverage ADFS as a highly available file server
• Provides single namespace for users to store home
directories and files
• Supported by AHV and ESXi
• Supports CIFS 2.1

170
File Services Constructs M1

• Each file server (high-level namespace) has a set of file services VMs deployed
• Shares are exposed to users
• A file server can have multiple shares
• Folders are for storing files

[Diagram: a cluster hosts file servers; each file server contains shares, which
contain folders]

171
File Services Constructs - Details M1
The file services feature is composed of multiple File Services VMs (FSVMs)
for distribution and scale
• Minimum of 3 FSVMs deployed
• FSVMs run as agent VMs and are transparently deployed
• DFS referrals redirect client requests to the hosting FSVM

[Diagram: clients connect to a file server made up of FSVMs, which host the folders]
172
M1

Module 10 – Data Resiliency


Nutanix ECA 5.5

173
M1

Hardware Unavailability

174
Node Failure M1
Node A Node B Node C

Controller Controller Controller


Guest VM Guest VM Guest VM
VM VM VM

Hypervisor Hypervisor Hypervisor

Storage Storage Storage

In the event of a host failure the VMs previously running


on that host are restarted on other healthy nodes
throughout the cluster
175
Controller VM Unavailability M1
Offline CVM Node Node

Controller Controller Controller


Guest VM
VM VM VM

Hypervisor Hypervisor Hypervisor

Storage Storage Storage

Causes include AOS-code upgrade, accidental CVM


delete, CVM power-off, and CVM failure
176
Drive Failure M1
• Cold-tier data (HDDs) – a single HDD drive failure won't result in data loss
• Performance-sensitive data (SSDs)
• Replication factor protects the data

[Diagram: SSD tier holds the Nutanix Home, Cassandra, OpLog, Unified Cache and
Extent Store; HDD tier holds Curator and Extent Store; DSF I/O path as shown
in Module 1]
177
Block Failure M1

• Block Awareness (min. of 3 blocks)


• Block Fault Tolerance (Best Effort)

Block Oplog

Block ZK Oplog

Block

178
M1

Redundancy Factor

179
Redundancy Factor M1

• Default – Redundancy Factor 2


• Redundancy Factor 3 Requirements:
- A cluster must have at least five nodes
- Data must be stored in containers with RF3
- CVMs require a minimum of 24 GB memory

180
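A hedged sketch of verifying the current and desired redundancy state from the
nCLI (command names as documented for AOS 5.x; confirm with "ncli cluster help"):

ncli> cluster get-redundancy-state                          # current vs. desired redundancy factor
ncli> cluster get-domain-fault-tolerance-status type=node   # node failures that can currently be tolerated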
Redundancy Factor 3 M1
6-Node Cluster RF3 2-Node Failure

Oplog Oplog Oplog Oplog

Oplog Oplog Oplog

Oplog Oplog Oplog

Cassandra = Metadata (M) Zookeeper (ZK) = Config data

Block M Block ZK ZK

Block M M Block ZK ZK ZK

Block M M Block ZK ZK

181
VM Flash Mode: Introduction M1
Configure VM Flash Mode:

• Option 1: Entire virtual disk in Hot Tier


• Option 2: Portion of the virtual disk in Hot Tier
Information Lifecycle Management (ILM) Service
automatically moves data between Hot and Cold Tiers
Day 1 Day 1 + X
HOT Tier HOT Tier
Important Frequently-
used Data
Infrequently-used Data

COLD Tier COLD Tier

182
VM Flash Mode – Considerations M1
• Reduces DSF’s ability to manage workloads dynamically
• No specific partition set aside; unused space has other purposes
• Clones and remote copies don’t preserve VM Flash Mode Config
• Requires Ultimate license: see here

183
M1

Module 11 – Data Protection


Nutanix ECA 5.5

184
M1

Data Protection

185
VM-centric data protection M1
• Native (on-site) and remote data replication
• Local replication
• Remote replication
• RPO and RTO considerations

[Diagram: a primary cluster takes local VM-centric vDisk snapshots and replicates
protection domains from Location 1 to remote sites at Locations 2 and 3]

186
Time stream (local snapshots) M1
Use Cases:
• Protection from guest-OS corruption
• Snapshot VM environment
• Self-service file-level restore

[Diagram: the primary cluster takes local VM-centric vDisk snapshots with
byte-level resolution (Nutanix snapshots)]

187
Asynchronous Replication M1

• Characteristics:
  • Multi-Disaster Recovery topologies
  • Multiple retention policies
  • VM and app-level consistency
  • Scheduler
• Use Cases:
  • Protection from VM corruption and deletion
  • Protection from total site failure
• Proprietary technology replicates between Nutanix clusters (the scheduler
defines the frequency of snapshots)
• 3rd-party tools available for replication between Nutanix clusters and other systems

[Diagram: protection domains replicated from Location 1 to remote sites at
Locations 2 and 3]
188
Synchronous Replication M1

Datastore A (active) Datastore A (active)

Real-time
replication of
data between
sites

189
NearSync M1

• One-minute minimum Recovery Point Objective (RPO) -


max 60 minutes
• Light-weight snapshot (LWS) technology for high-
granularity resolution
• Prism supports familiar protection workflows
• Uses markers instead of vDisk-based snapshots
• Operations log-based LWS never lands on HDDs
• User can select minute snapshots for restores

190
Cloud Connect M1
Characteristics:
• Hybrid NTNX Cloud solution
• Integration with Azure and
AWS
• Interoperability with NTNX DR
solutions
Use Cases:
• Archiving
• Backup—not Disaster
Recovery

191
Protection Strategies M1
For a visual explanation, you can watch the following video: link

[Diagram: replication topologies built from protection domains]
• Two-way mirroring (protection domains A and B)
• Many-to-One topology (protection domains C and D replicated to one site)
• One-to-many topology (protection domain A replicated to multiple sites)
• Many-to-Many topology (protection domains C, D, E, F)
192
Data Replication Setup Steps M1
On the local cluster:
1. Create container, VMs and other components
2. Create Protection Domain and add entities
3. Set up schedule and invoke retention policies

On the remote cluster:
4. Create container
5. Configure remote site

Note: Entities are VMs, files, or volume groups

[Diagram: local cluster with Container "X", VMs 'Z' and 'A', and a Protection Domain
pointing at a remote site (the remote cluster); remote cluster with Container "Y",
VM 'L', and a remote site pointing back at the local cluster]


193
M1

Protection Domains

194
Protection Domain (PD): Concepts M1
• Async Disaster Recovery
• Sync/Metro Protection Domain
• Consistency Group
• Schedule
• Snapshot

[Diagram: a protection domain contains consistency groups, each grouping VM(s)
and file(s)]
195
Managing Protection Domains via CLI M1

196
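The screenshot shows the nCLI protection-domain workflow; a hedged sketch of the
typical commands (the names, VM list and retention value are placeholders;
"ncli pd help" lists the exact options for your AOS version):

ncli> pd create name="PD1"                                       # create an async protection domain
ncli> pd protect name="PD1" vm-names="Edu01VM,Edu02VM"           # add entities (VMs) to it
ncli> pd add-one-time-snapshot name="PD1" retention-time=86400   # ad-hoc snapshot kept for one day
ncli> pd list                                                    # review protection domains and their status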
M1

Remote Sites

197
Disaster Recovery: Configure cluster as
remote site (1) M1
A remote site can be a physical cluster or in the cloud (AWS or Azure)
• Create a Remote Site:
1. From Prism, choose the Data Protection dashboard
2. Click + Remote Site
3. Select Physical Cluster
4. Enter the cluster name
5. Select Backup or Disaster Recovery
6. "IP Address" is the external virtual IP address of the cluster that is the
replication target
7. Click Add Site
198
Replication Traffic Optimization M1

• Compress on
Wire
• Bandwidth
Throttling

199
Protection Domain failover M1

200
M1

Module 12: Prism Central


Nutanix ECA 5.5

201
M1

Prism Central Overview

202
Self-Service Portal (SSP) M12
M1
• Self-service access to
resources
• Now on Prism Central
(formerly Prism
Element)
• Requires AOS 5.5 or
later
• Create and manage VMs
• A DIY (Do It Yourself)
portal
• Uses resources from
single AHV cluster

203
Process to Restore Using Self-Service Restore (SSR) M1

• Select and mount the snapshot
• Launch the SSR utility in the VM
• Move the required files/folders from the snapshot disk to the original disk

204
M1

Module 13 – Cluster Maintenance


Nutanix ECA 5.5

205
M1

Cluster Verification

206
Checking the Status of Cluster Services M1
$ cluster status

207
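A few related CVM commands, shown as a sketch (these are the standard cluster
and genesis utilities available on a CVM):

$ genesis status       # services managed by genesis on the local CVM
$ cluster start        # start any services that are stopped, cluster-wide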
Cluster Verification M1
• After installing the cluster, you need to verify that it is
set up correctly using NCC
$ ncc health_checks run_all
• NCC checks for common misconfiguration problems,
such as using 1GbE NICs instead of the 10GbE NICs, and
verifies that settings are correct.

208
M1

Support Resources

209
Support Portal M1

To open a case:

• NCC Output

• Log Bundle

• Licensing Info

210
M1

Module 14 – Life Cycle


Operations
Nutanix ECA 5.5

211
M1

License Management

212
Cluster Licensing Considerations M1
• Nutanix nodes and blocks are delivered with a default
Starter license that does not expire
• An administrator must install a license after creating a
cluster for Pro and Ultimate licensing
• Reclaim licenses before destroying a cluster
• Ensure consistent licensing for all nodes in a cluster
(Nodes with different licensing default to minimum
feature set)

213
Portal Connection for License Management M1
• Simplifies licensing by integrating license workflow in Prism
• After configuration, an administrator can perform most
licensing tasks throughout Prism
• Eliminates the need to log in to Nutanix Support Portal
• Communicates with Nutanix Support Portal to detect
changes to the status of a cluster license
• Disabled by default
• Requires outbound connectivity to portal.nutanix.com: 443

214
Before managing Nutanix licensing M1

• Ensure that there is a cluster created


• Reclaim licenses before destroying a cluster

215
M1

Software and Firmware


Upgrades

216
Software and Firmware Upgrades M1
Nutanix provides a mechanism to upgrade

• Software
• Add-on features
• NCC
• Firmware
These upgrades are done in a non-intrusive, rolling

manner, 1 node at a time, so that a running cluster can be


upgraded without loss of services.

217
Hypervisor Upgrade Overview and
Requirements
M1
Before upgrading, do the following:

• Run the Nutanix Cluster Check (NCC) health checks from any
CVM in the cluster
• Download the available hypervisor software from the vendor
and the metadata file (JSON) from the Nutanix Support Portal
• If you are upgrading AHV, you can download the binary bundle
from the Nutanix Support Portal
• Upload the software and metadata through Upgrade Software

218
M1

Thank you!

219
