Agenda M1
1. Introducing the Nutanix Enterprise 8. AHV Workload Migration
Cloud
9. Acropolis Services
2. Managing a Nutanix Cluster
10. Data Resiliency
3. Securing a Nutanix Cluster
11. Data Protection
4. Networking
12. Prism Central
5. VM Creation and Management
13. Cluster Maintenance
6. Health Monitoring and Alerts
14. Life Cycle Operations
7. Distributed Storage Fabric
3
M1
6
What is the Nutanix Enterprise Cloud?
A full set of infrastructure services delivered:
• A node consists of a single physical host and a Controller VM (CVM)
• A block consists of one to four nodes mounted in a chassis, sharing power, backplane, and cooling
• A cluster consists of multiple nodes (typically in blocks) converged to deliver a unified pool of infrastructure resources

Node and Block Example
(diagram: a node and a block)
Cluster Example
• Joining multiple nodes in a cluster allows for the pooling of resources
o A Nutanix Enterprise Cloud consists of a cluster containing 3 or more nodes
• Storage is presented as a single pool via the Controller VM (CVM)
(diagram: Node 1, Node 2, … Node N)
Creating a Cluster with Different Products
Nodes in a cluster can be from different vendors or processor versions:
• Always check the Compatibility Matrix on the Support Portal
• G4 and G5 nodes can be in the same cluster, but not in the same block
• Hardware from different vendors cannot be in the same cluster
• Nodes with Self-Encrypting Drives (SED) and standard HDD/SSD can be in the same cluster

Software-Only Features – Acropolis
Acropolis software is available in three editions: Starter, Pro, and Ultimate.

Storage Concepts
Enterprise Cloud Storage Components
Storage in a Nutanix Enterprise Cloud is represented by storage pools.
(diagram: storage pools)

Acropolis Overview

Acropolis Components
• Acropolis Hypervisor – comprehensive, enterprise-class virtualization
I/O Path
The DSF I/O path uses two key structures:
• Oplog – handles incoming write I/O; sequential and random writes are treated differently
• Unified Cache – a unified memory/SSD cache that serves read I/O

Write I/Os
Write placement from the CVM:
• If local SSDs are not full, writes go to the local SSD
• If local SSDs are full, writes go to a remote SSD; data migration can later move data to remote HDD
Note: Sequential I/O can bypass SSD and write directly to HDD.
Data Locality and Live Migration
• Utilization monitoring
• Live migration
• Post migration
(diagram: user VMs on multiple hosts over the DSF)

Resources

Support Portal – Documentation
Support Portal – Knowledgebase
AHV Turbo
• I/O path optimization
• Remote Direct Memory Access (RDMA)
• New I/O subsystem designed to deliver increased performance
(diagram: guest, host, and hardware layers)
NearSync Replication
• New replication method in addition to asynchronous and synchronous
o Provides better protection for mission-critical applications
• Able to achieve 1-minute RPO times
• VM/application-level retention and restores
• No latency/distance restrictions

NearSync Replication – Lightweight Snapshots
• Continuous Data Protection-like technology
• LWS store located on SSD
• Snapshots are replicated to the remote system
• Users can restore from any available LWS, no matter how fine-grained
• Continue using Protection Domain-based workflows
o Introduces time-based retention policies
o Automatically creates multiple (roll-up) schedules
Xtract
• Automates the many steps required to manually migrate or rebuild VMs and applications on new infrastructure
o Xtract for VMs – automates "lift and shift" VM migrations
o Xtract for DBs – migrates full DB instances at the application level

Cluster Components
• Cluster management and maintenance (MapReduce)

Module 2 – Managing a Nutanix Cluster
Nutanix ECA 5.5
Management Interfaces – Overview
Several methods are available to manage a Nutanix implementation:
• Graphical UI – Prism Element and Prism Central
o Preferred method for management
o Manages the entire environment (when using Prism Central)
• Command-line interfaces
o nCLI – Get status and configure entities within a cluster
o aCLI – Manage the Acropolis portion of the Nutanix environment
• Nutanix PowerShell Cmdlets – For use with Windows PowerShell
• REST API – Exposes all GUI components for orchestration and automation
Accessing Prism Element
First-time login:
• Point a browser to http://management_ip_addr
o The browser will redirect to port 9440
o An SSL warning may appear
o If a Welcome screen displays, accept the terms

Initial Configuration – Pulse Settings
Pulse is enabled by default and is configured by selecting it from the Gear icon:
• Enable/Disable
• Set Email Recipients
• Configure Verbosity
o Basic: Collect basic statistics only
o Basic with Coredump: Collect basic statistics plus core dump data

What Gets Shared by Pulse?
• Information collected and shared
• Information NOT shared
Command Line Interfaces – Overview
Run system administration commands against a Nutanix cluster from:
• A local machine
• Any CVM in the cluster
Two CLIs:
• nCLI – Get status and configure entities within a cluster
• aCLI – Manage hosts, networks, snapshots, and VMs (the Acropolis portion of the Nutanix environment)
The Acropolis 5.5 Command Reference Guide contains nCLI, aCLI, and CVM commands.
Command Line Interfaces – nCLI
• Download the nCLI installer to a local machine
o Preferred method; nCLI commands can also be run from a CVM
• Open a command prompt (bash or CMD)
• Enter:
ncli -s management_ip_addr -u 'username' -p 'user_password'
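Once connected, nCLI commands take an entity/action form. A short sketch of representative commands against a live cluster (entity names here follow the AOS 5.x conventions; confirm them against the Command Reference for your version):

```shell
# Show cluster-wide information
ncli cluster info

# List storage pools and containers in the cluster
ncli storagepool list
ncli container list
```

Each command can also be run non-interactively by prefixing it with the connection flags shown above.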
nCLI Entities and Parameters

aCLI Examples
• Clone a VM:
vm.clone testClone clone_from_vm=Edu01VM
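A few more aCLI operations, run from the acli shell on a CVM. These are standard aCLI verbs; check the Acropolis Command Reference for your AOS version before relying on exact syntax:

```shell
# List all VMs managed by Acropolis
vm.list

# Power a VM on or off (VM name is the one used in the clone example)
vm.on Edu01VM
vm.off Edu01VM

# List configured networks
net.list
```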
PowerShell Cmdlets

PowerShell Cmdlets – Overview
Nutanix provides a set of PowerShell Cmdlets to perform system administration tasks using PowerShell:
• Install PowerShell v2 and .NET Framework 4
• Download the installer from Prism Element
• A shortcut, NutanixCmdlets, is created
• Launching the shortcut opens a PowerShell window with the Cmdlets loaded
• Automates common operational tasks on Nutanix clusters
• API Reference available on the Support Portal

REST API

REST API – Overview
Sample API Call – Get VMs M1
1. Select an entity and operation
2. Provide parameters
3. Click Try it out!
4. View response
78
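Outside the REST Explorer, the same call can be made from any HTTP client. A minimal curl sketch, assuming the v2.0 API endpoint and basic authentication (management_ip_addr, username, and password are placeholders; the exact path can vary by AOS/API version):

```shell
# List VMs via the Nutanix REST API (v2.0) over the Prism gateway on port 9440.
# -k skips certificate validation (self-signed certs are common on new clusters).
curl -k -u 'username:password' \
  "https://management_ip_addr:9440/PrismGateway/services/rest/v2.0/vms/"
```

The response is JSON and can be fed into any orchestration or automation tooling.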
Sample API Call – Create a Network
Module 3 – Securing a Nutanix Cluster

Security Features
• Two-factor authentication
• Cluster lockdown
• Key management and administration
• STIG implementation
• Data at Rest Encryption

Two-Factor Authentication
• Logons require a combination of a client certificate and a username and password
Key Management and Administration
• A key management server (KMS) is used; Nutanix nodes authenticate to the KMS

STIG SCMA Monitoring
Security Configuration Management Automation (SCMA):
• Monitors over 800 security entities covering storage, virtualization, and management
Note: Detailed information and hands-on labs are part of the Advanced Administration and Performance Monitoring course.
Data at Rest Encryption (DARE)
Secures data at rest using self-encrypting drives and key-based access management:
• Data is encrypted on all drives at all times
• Data is inaccessible in the event of drive or node theft
• Data on a drive can be securely destroyed
• Key authorization enables password rotation at arbitrary times
• Protection can be enabled or disabled at any time
• No performance penalty is incurred despite encrypting all data

DARE Implementation
1. Install SEDs for all data drives in a cluster
o Drives are FIPS 140-2 Level 2 validated and use validated cryptographic modules
4. If a node experiences a power failure, the CVM retrieves keys from the KMS once it is back online
o If a drive is re-seated, it becomes locked
o If a drive is stolen, the data is inaccessible
o If a node is stolen, the KMS can revoke the node certificates to prevent data access
Configuring DARE in Prism Element
• DARE configuration settings can be accessed from the Gear menu
• If DARE is not on the menu, the nodes don't contain SEDs

Security Policies (Prism Central + Flow)
Example policy rules:
• Category: Web – IN: Any HTTP/S; OUT: App
• Category: Shared Apps – IN: App, DB; OUT: None
Exercise:
• Knowledge Check

Configuring Authentication

Configuring Authentication – Overview
• Adding a Directory List
• Enabling Client Authentication
• Regenerating a Self-Signed Certificate
• Importing a Certificate
Adding a Directory List
To add an authentication directory:
1. Click the Directory List tab
2. Click the New Directory button
3. Select a Directory Type from the drop-down menu (Active Directory or OpenLDAP)
4. Enter the required details
5. Click Save
Enabling Client Authentication
Effectively enables two-factor authentication by ensuring that the user provides a valid certificate.
To enable client authentication:
1. Click the Client tab
2. Select the Configure Client Chain Certificate check box
3. Click Choose File
4. Select a Client Chain Certificate file for upload
5. Click Open

Role Mapping
Module 4 – Networking

Networking Overview

Providing Network Connectivity to VMs
• Two methods: managed and unmanaged networks
Creating the Backplane Network
To create the backplane network, first click Configure for eth2 in the Network Configuration window, then:
1. Provide a non-routable subnet
2. Specify the netmask
3. Provide a VLAN ID if assigning the interface to a VLAN
4. Click Verify and Save

vSwitch Implementation
vSwitch Implementation (AHV) – Overview
(diagram: bridge uplinked to Physical Switch 1 and Physical Switch 2)

vSwitch Implementation (AHV) – Ports and Bonds
(diagram: Acropolis Hypervisor with eth0–eth3 uplinked to Physical Switch 1 and Physical Switch 2)
Open vSwitch (OVS)
• Open-source software switch designed for multiserver virtualization environments
• Used by AHV to connect the CVM, hypervisor, and VMs to each other and to the physical network
• The OVS package is installed by default on each AHV node
• OVS services start automatically when a node starts
• Behaves like a Layer 2 learning switch, maintaining a MAC address learning table
Network Best Practices (AHV)
• Configure the CVM and AHV in the same VLAN
• Configure 10GbE ports as physical ports on open vSwitch br0
• Configure the corresponding switch ports as trunk ports
• Remove 1GbE ports from open vSwitch br0
• Connect all nodes to the same Layer 2 network
• Don't make any changes to the Linux bridge
• Don't remove the CVM from open vSwitch br0 and the Linux bridge
• Enable PortFast
The Best Practices Guide covers a wide range of networking scenarios that Nutanix administrators encounter. If the defaults don't match customer requirements, refer to the Advanced Networking Guide.
Bond Modes Overview
Supported vSwitch bond modes:
• Option 1: Active-backup (active-passive)
• Option 2: Balance-slb (active-active)
• Option 3: Balance-tcp (Transmission Control Protocol)
(diagram: VM1, VM2, and the CVM connected through OVS to two 10GbE uplinks)
Load Balancing Modes: Active-Backup
Active-passive:
• Provides only fault tolerance
• No special hardware required (physical switches available for redundancy)
• CVM and guest VMs follow the same active path
• Throughput for the CVM and individual VMs: 10 Gbps (one active 10GbE interface)
(diagram: OVS in active-backup mode uplinked to Sw1 and Sw2)
Load Balancing Modes: Balance-slb
Active-active
(diagram: VM and CVM traffic distributed across Sw1 and Sw2)

Determine the Uplink Bond Mode
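The configured bond mode and uplinks can be inspected from the command line. A sketch using the CVM utility shown later in this module plus the standard OVS tool on the AHV host (verify flags against your AOS version):

```shell
# From any CVM: show the uplinks and bond configuration for bridge br0
# on every host in the cluster
allssh manage_ovs --bridge_name br0 show_uplinks

# On an AHV host: show bond status, including mode and the active member
ovs-appctl bond/show bond0
```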
AHV Host Networking – Default Configuration
$ allssh manage_ovs --bridge_name br0 show_uplinks
• Default hypervisor configuration: 2x 1GbE and 2x 10GbE in the same bridge/bond of Open vSwitch (OVS)
• eth0, eth1, eth2, and eth3 of each hypervisor are bonded to bond0
• CVM traffic can travel over either a 1GbE or a 10GbE interface
Note: The two physical switches are redundant, each with 1GbE and 10GbE adapters.
(diagram: Open vSwitch Bridge 0 / Bond 0 with eth0–eth3 uplinked to Physical Switch 1 and Physical Switch 2)
Node Interfaces – CVM$ ifconfig
Recommended configuration:
• 1GbE and 10GbE links in separate bridges/bonds to ensure that CVM traffic always uses a 10GbE interface
• Guest VMs can be connected to either br0 or br1
o For example, production vs. test or backup
(diagram: Bond 0 and Bond 1 in separate bridges, uplinked to Physical Switch 1 and Physical Switch 2)

AHV Network Dashboard
Exercises:
• Creating an Unmanaged Network
• Creating a Managed Network
• Managing an Open vSwitch (OVS)
• Creating a new OVS

Exercise:
• Knowledge Check
http://bit.ly/ncptest01
Questions 1–28
Image Service

Image Service – Overview
Supported image formats:
• raw
• vhd (virtual hard disk)
• vmdk (virtual machine disk)
• vdi (Oracle VirtualBox)
• iso (disc image)
• qcow2 (QEMU copy-on-write)

Supported Guest VM Types for AHV
• OS types with SCSI bus types
• OS types with PCI bus types
Nutanix Guest Tools (NGT) Overview
NGT is a software bundle that is installed in a guest VM and provides:
• Communication with the CVM
• File-level restore (FLR CLI) capabilities
• Migration between hypervisors (mobility drivers)
• App-consistent snapshots: VSS requestor and hardware provider for Windows VMs, and snapshot support for Linux VMs
VirtIO Drivers
• Improve the stability and performance of VMs on AHV
• For Windows VMs only
• SCSI driver
• Memory balloon driver
• NIC driver

Acropolis CLI: Shutting Down VMs
GPU Support
GPU Passthrough
• Allows a VM to have full access to the GPU
• PCI passthrough enforces security and isolation
• The user configures the GPU type and number; the AHV scheduler matches VM GPU requirements at VM power-on time
• Powered-on VMs with a GPU assigned are not live-migratable, so the user is asked to confirm
vGPU
• A slice of a GPU dedicated to a VM
Module 6 – Health Monitoring and Alerts

Health Dashboard

Overview
• The Health dashboard displays dynamically updated health information about VMs, hosts, and disks in the cluster
• Select an entity type, then choose a grouping; a summary of health checks is displayed
Run Checks
• The Run Checks menu option is used to manually run a series of NCC checks:
1. Select which checks to run
2. Determine if you want to send a results report
3. Click Run
• Useful when running a single check after resolving a problem

Analysis Dashboard
Overview
• The Analysis dashboard allows you to create charts to display performance metrics from a number of dynamically monitored entities
• The Chart Definitions pane lists metric and entity charts that can be run
• The Chart Monitors pane displays details for selected charts based on a given time interval
• The Alerts and Events pane displays details on any alert or event that occurred during the interval shown in the chart

Creating a Chart
• No charts are provided by default
• To create a chart:
1. Click New and select the type of chart
2. Supply a name for the chart
3. Add metrics, entity types, and entities
4. Click Save
Alerts Dashboard

Overview
• The Alerts dashboard displays alert or event messages, depending on the selected viewing mode
• Separate viewing modes are available for Alerts and Events
Alerts View
• The Alerts View displays each alert, including severity level, time stamp, related entity, and whether the alert has been acknowledged or resolved
• Select one or more alerts, then click Acknowledge or Resolve to take action on the alert
• Use the Action buttons to configure Alert policies or set up Alert Email Configuration

Redirection, Cause, and Resolution
• Clicking the link for an entity redirects to the dashboard where the alert originated
• Clicking Cause or Resolution displays additional information
Filtering Alerts
Alerts can be filtered by:
• Severity
o All
o Critical
o Warning
o Info
• Resolution
o All
o Unresolved
o Automatically Resolved
o Manually Resolved
• From/To date range

Module 7 – Distributed Storage Fabric (DSF)
Deduplication Process
• Streams of data are fingerprinted during ingest
o Uses a SHA-1 hash at 16K granularity
o Stored persistently as metadata
• Fingerprints are distributed across all nodes for global deduplication across the entire cluster
• 100% software-defined
• Offloads SHA-1 fingerprinting to the CPU for minimal overhead
• Can result in more space within the performance tier, allowing more active VM data to fit
(diagram: distributed deduplication before/after; U = unique blocks)

Deduplication Techniques
• Inline deduplication
• Post-process deduplication

Capacity Optimization – Compression
Compression Overview
Compression is a process that reduces the number of bits used to represent individual chunks of data:
• A data reduction technique that optimizes storage capacity
• Uses an algorithm to determine how to compress data
• Ideal for data that's written once and read frequently
o File servers
o Archiving
o Backup
• Can be used in conjunction with data deduplication and/or erasure coding
Compression Process
Two compression techniques:
• Inline
o Compresses large or sequential streams of data as they are written to the Extent Store
o If data is random, it is written uncompressed to the OpLog, coalesced, then compressed before being written to the Extent Store
o Uses an extremely fast compression algorithm (LZ4)
• Post-process
o Data is initially written uncompressed
o Leverages the Curator framework to compress the data afterward

Compression Technique Comparison
• Inline compression
• Post-process compression
Capacity Optimization – Erasure Coding

Erasure Coding-X (EC-X) Overview
• Erasure coding is performed post-process
• Leverages the Curator MapReduce framework
o Curator finds eligible extent groups available for coding
o Eligible data must be "write-cold"
o Coding tasks are distributed via Chronos
• Does not affect the traditional write I/O path
• Applies to RF2 and RF3 data whose primary copies are local and replicas are distributed throughout the cluster
(diagram: strip and parity blocks distributed across nodes)

EC-X Workloads
What is Acropolis Block Services (ABS)?
• Native scale-out block storage solution
• Enables the cluster to provide block storage as LUNs
• Provides access via iSCSI
• Presents LUNs as vDisks
(diagram: a Volume Group with Volume 1, Volume 2, and Volume 3 accessed by VMs)

ABS Use Cases
• Shared disks (Oracle RAC, Microsoft Failover Clustering)
• Disks as first-class entities: execution contexts are ephemeral and data is critical
• Guest-initiated iSCSI supports bare-metal consumers and Microsoft Exchange on vSphere
What is Acropolis File Services (AFS)?
• Lets users leverage AFS as a highly available file server
• Provides a single namespace for users to store home directories and files
• Supported on AHV and ESXi
• Supports CIFS 2.1

File Services Constructs

File Services Constructs – Details
The file services feature is composed of multiple File Services VMs (FSVMs) for distribution and scale:
• Minimum of 3 FSVMs deployed
• FSVMs run as agent VMs and are transparently deployed
• DFS referrals redirect client requests to the FSVM hosting the folder
(diagram: clients connecting to a file server and its folders)
Hardware Unavailability

Node Failure
(diagram: Nodes A–C showing the Extent Store, Oplog, Unified Cache, and HDD tiers)

Block Failure
(diagram: Oplog and Zookeeper (ZK) replicas distributed across blocks)
Redundancy Factor

Redundancy Factor 3
(diagram: a 6-node RF3 cluster surviving a 2-node failure, with metadata (M) and Zookeeper (ZK) replicas spread across blocks)

VM Flash Mode: Introduction
• Configure VM Flash Mode
VM Flash Mode – Considerations
• Reduces DSF's ability to manage workloads dynamically
• No specific partition is set aside; unused space serves other purposes
• Clones and remote copies don't preserve the VM Flash Mode configuration
• Requires an Ultimate license

Data Protection
VM-Centric Data Protection
• Native (on-site) and remote data replication
• Local replication
• Remote replication
• RPO and RTO considerations
(diagram: a primary cluster at Location 1 replicating protection domains to remote sites at Location 2)

Time Stream (Local Snapshots)
Use cases:
• Protection from guest-OS corruption
• Snapshot the VM environment
• Self-service file-level restore
(diagram: local VM-centric vDisk snapshots on the primary cluster, with byte-level resolution)
Asynchronous Replication
Characteristics:
• Multi-disaster-recovery topologies
• Replication of data between sites
(diagram: protection domains replicated from Location 1 to remote sites at Location 2)

NearSync
Cloud Connect
Characteristics:
• Hybrid Nutanix cloud solution
• Integration with Azure and AWS
• Interoperability with Nutanix DR solutions
Use cases:
• Archiving
• Backup (not disaster recovery)

Protection Strategies
• One-to-many and many-to-many protection domain topologies

Protection Domains
Protection Domain (PD): Concepts
• Async disaster recovery
• Consistency groups

Managing Protection Domains via the CLI

Remote Sites
Disaster Recovery: Configure a Cluster as a Remote Site
A remote site can be a physical cluster or in the cloud (AWS or Azure).
To create a remote site:
1. From Prism, choose the Data Protection dashboard
2. Click + Remote Site
3. Select Physical Cluster
4. Enter the cluster name
5. Select Backup or Disaster Recovery
6. For IP Address, enter the external virtual IP address of the cluster that is the replication target
7. Click Add Site

Replication Traffic Optimization
• Compress on wire
• Bandwidth throttling

Protection Domain Failover
Self-Service Portal (SSP)
• Self-service access to resources
• Now in Prism Central (formerly in Prism Element)
• Requires AOS 5.5 or later
• Create and manage VMs
• A DIY (do-it-yourself) portal
• Uses resources from a single AHV cluster

Restore Process Using Self-Service Restore (SSR)
• Select and mount the snapshot
• Launch the SSR utility in the VM
• Move the required files/folders from the snapshot disk to the original disk
Cluster Verification

Checking the Status of Cluster Services
$ cluster status

Cluster Verification
• After installing the cluster, you need to verify that it is set up correctly using NCC:
$ ncc health_checks run_all
• NCC checks for common misconfiguration problems, such as using 1GbE NICs instead of the 10GbE NICs, and verifies that settings are correct.
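The two verification commands above can be combined into a quick post-install routine run from any CVM. A sketch (the check module names vary by NCC version; running ncc with no arguments lists what your build supports):

```shell
# Confirm all cluster services are up on every CVM
cluster status

# Run the full NCC health-check suite
ncc health_checks run_all
```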
Support Resources

Support Portal
• NCC output
• Log bundle
• Licensing info

License Management

Cluster Licensing Considerations
• Nutanix nodes and blocks are delivered with a default Starter license that does not expire
• An administrator must install a license after creating a cluster for Pro and Ultimate licensing
• Reclaim licenses before destroying a cluster
• Ensure consistent licensing for all nodes in a cluster (nodes with different licensing default to the minimum feature set)
Portal Connection for License Management
• Simplifies licensing by integrating the license workflow into Prism
• After configuration, an administrator can perform most licensing tasks through Prism
• Eliminates the need to log in to the Nutanix Support Portal
• Communicates with the Nutanix Support Portal to detect changes to the status of a cluster license
• Disabled by default
• Requires outbound connectivity to portal.nutanix.com:443

Before Managing Nutanix Licensing
Software and Firmware Upgrades
Nutanix provides a mechanism to upgrade:
• Software
• Add-on features
• NCC
• Firmware
These upgrades are done in a non-intrusive, rolling fashion.

Hypervisor Upgrade Overview and Requirements
Do the following:
• Run the Nutanix Cluster Check (NCC) health checks from any CVM in the cluster
• Download the available hypervisor software from the vendor and the metadata file (JSON) from the Nutanix Support Portal
• If you are upgrading AHV, you can download the binary bundle from the Nutanix Support Portal
• Upload the software and metadata through Upgrade Software

Thank you!