Table of Contents

Lab Overview
    How to submit a bug in Hosted Beta
Virtual SAN
Virtual SAN Overview
    What is Virtual SAN
    Key Components
    Customer Benefits
    Primary Use Cases
Virtual SAN Requirements
    vCenter Server
    vSphere
    Disk & Network
Module 1: Virtual SAN Setup and Enable
    Setup of Virtual SAN Network and Enable Cluster
    Easy Setup
    Setup Virtual SAN Network
    Navigate from Home to Hosts & Clusters
    Navigate to esx-01a.corp.local
    Add Virtual SAN Network
    Virtual SAN traffic
    Select Target Device
    Select Network
    Target Device Selected
    Specify Virtual SAN for Port Group
    Use IPv4 DHCP
    Ready to complete
    vmk3 VSAN Network Added
    Enable Virtual SAN
    Turn ON Virtual SAN
    Refresh
    All hosts participating in Virtual SAN
    Create Disk Group for Virtual SAN
    Claim Disks for Virtual SAN Use
    Hosts and Disks Selected
    Task Begins
    Refresh
    vsanDatastore
    Verify Storage Provider Status
    Select VM Storage Policies
    VM Storage Policies in vCenter
    Create my first VM storage policy
    Create a new VM Storage Policy
    Rule-Sets
    Create a Rule
    Default Virtual Machine Storage Policies
    Hosts and Clusters View
    Default Storage Service Level during Storage vMotion
    Select Migration Type
    Move the VM to a Virtual SAN Datastore
    Review Selection
    Verification
    Physical Disk Placement
    Storage vMotion from a Virtual SAN datastore
    Select Migration Type
    Move the VM to a non-Virtual SAN datastore
    Useful Virtual SAN CLI Commands
    Open PuTTY
    ssh to esx-01a.corp.local
    Login to esx-01a.corp.local
    vsan Commands
    vsan cluster
    vsan network
    vsan storage
    vsan policy
    Conclusion
Module 2: Virtual SAN with vMotion, Storage vMotion and HA Interoperability
    Build VM Storage Policies
    Enable Storage Policies
    Select VM Storage Policies
    Create VM storage policy
    Create a new rule for Tier 2 Apps
    Rule-Sets
    Create a Rule on Number of Failures to Tolerate
    How many failures to tolerate?
    Matching Resources
    Ready to complete
    Tier 2 Apps Rule Ready
    vMotion & Storage vMotion
    Virtual SAN Interoperability
    Storage vMotion from NFS to vsanDatastore
    Migrate base-sles VM
    Change datastore
    Select vsanDatastore
    Review and Finish
    Storage vMotion underway
    Review the new destination
    vMotion from host with local storage to host without local storage
    Change Host
    Allow Host Selection
    Select esx-04 Host
    Migrate VM back to esx-01a
    vSphere HA and Virtual SAN Interoperability
    vSphere HA & Virtual SAN Interoperability
    Enable HA on the cluster
    Turn ON vSphere HA
    HA Enabled
    Host Failure, No Running VMs
    Reboot esx-02a
    esx-02 Host Failure
    Other hosts in Virtual SAN cluster status
    Check base-sles VM Home
    Check base-sles Hard Disk 1
    Host Failure with Running VMs
    Start VM
    Identify Host
    VM Storage Policies
    vMotion to esx-03a
    Select esx-03 as the host
    Confirm esx-03a.corp.local
    Reboot esx-03a
    Host Status
    base-sles Status
    Refresh
    base-sles has restarted on another host
    Quorum to run the VM
    Conclusion
Module 3: Virtual SAN Storage Level Agility
    Setting up our environment
    Enter into Maintenance Mode
    Moving a vSphere Host out of the cluster
    Defining your VM Storage Policies
    Decisions when creating a VM Storage Policy
    Storage Policies
    Creating a VM Storage Policy (1)-(7)
    Create a Virtual Machine and apply VM Storage Policy (1)-(4)
    View Physical Disk Placement of the VM
    Understanding the storage requirements of a VM
    Overview of the Capabilities of a VM Storage Policy
    Understanding VM Storage Policies
    Modify VM Storage Policies (1)-(2)
    Resync VM with the Policy Change (1)-(2)
    View Physical Disk Placement of the VM
    Scaling out your Compute and Storage resources
    Adding a Compute Node
    Verify vsanDatastore access
    Add a Compute Node with Local Storage (1)-(11)
    Verify vsanDatastore Disk Groups
    View vsanDatastore Capacity
    Changing VM Storage Policies on the fly
    Modify the VM Storage Policy (1)-(2)
    Resync Virtual Machine with Policy Changes (1)-(2)
    Virtual SAN Command Line and Troubleshooting
    Which interface is Virtual SAN using for communication?
    Which disks have been claimed by Virtual SAN?
    Get Cluster details
    Conclusion
    Virtual SAN Summary
Lab Overview
How to submit a bug in Hosted Beta
We want to make the best use of your time while getting valuable feedback from you on your experience of using our products. As you progress through the Hosted Beta lab, we'll record all of your lab activity. When you're ready to give us some feedback, use the vSubABug tool to tell us what you think. When you click Submit, we'll grab the last few minutes of your lab activity and all the relevant logs, so our engineers get useful context on what you did in the lead-up to that feedback. Double-click the vSubABug icon on your VMware View Desktop control center.
You will be prompted to enter your email address or station number. Enter your email address and click OK. Describe the bug, click Add for each one, and then click Submit TheBugs!
Virtual SAN
This lab is focused on a new storage feature in vSphere: Virtual SAN (VSAN). The lab is broken up into three modules. Each module builds on the previous one, so it is preferred that they be taken in order. The three modules will take about 120 minutes to complete. Please be aware of some reboot time needed at the end of Module 2; if you plan to continue from Module 2 to Module 3, factor in a few extra minutes before you can start Module 3. The modules are:

- Module 1: Virtual SAN Setup, Enable and Build Storage Policies (60 minutes)
- Module 2: Virtual SAN with vMotion, Storage vMotion and HA Interoperability (30 minutes)
- Module 3: Virtual SAN Storage Level Agility (30 minutes)

The times listed are averages; depending on your experience, your time may be more or less.
Key Components
- Hypervisor-based software-defined storage
- Aggregates local HDDs to provide a clustered datastore for VM consumption
- Leverages local SSDs as a cache
- Distributed RAID (Redundant Array of Independent Disks) object-based architecture provides no single point of failure
- Policy-based VM storage management for end-to-end SLA enforcement
- Integrated with vCenter
- Integrated with vSphere HA, DRS and vMotion
- Scale-out storage: 3-8 nodes
Customer Benefits
VMware recognizes the significant cost of storage in many virtualization projects. Many projects stall or are canceled because the storage needed to meet the project's requirements simply becomes too expensive. Using a hybrid approach of SSD for performance and HDD for capacity, Virtual SAN (VSAN) is aimed at re-enabling projects that require a less expensive storage solution.

- Easy to set up, configure & manage
- Eliminate performance bottlenecks and single points of failure
- Lower storage TCO
vCenter Server
Virtual SAN (VSAN) requires a vCenter Server running version 5.5. Virtual SAN can be managed by both the Windows version of vCenter Server and the vCenter Server Appliance (VCSA). Virtual SAN is configured and monitored via the vSphere Web Client, which also needs to be version 5.5.
vSphere
Virtual SAN (VSAN) requires at least 3 vSphere hosts (where each host has local storage) in order to form a supported Virtual SAN cluster. This allows the cluster to meet the minimum availability requirement of tolerating at least one component failure; with fewer hosts, the availability of virtual machines is at risk if a component becomes unavailable. The vSphere hosts must be running vSphere version 5.5 at a minimum. The maximum number of hosts supported is 8 in the initial release of Virtual SAN. Each vSphere host in the cluster that contributes local storage to Virtual SAN must have at least one hard disk drive (HDD) and at least one solid state disk drive (SSD).
- An HBA or a pass-through RAID controller is required (a RAID controller that can present disks directly to the host without a RAID configuration).
- A combination of HDD & SSD devices is required (a minimum of 1 HDD & 1 SSD [SAS or SATA]). VMware recommends a 1:10 ratio between SSD and HDD capacity.
- The SSD provides both a write buffer and a read cache. The more SSD capacity in the host, the greater the performance, since more I/O can be cached.
- Not every node in a Virtual SAN (VSAN) cluster needs to have local storage. Hosts with no local storage can still leverage the distributed datastore.
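To make these sizing rules concrete, here is a minimal shell sketch (not part of the lab; the disk counts and capacities are hypothetical) that checks a host's disk mix against the minimums above and the recommended 1:10 SSD:HDD capacity ratio:

```shell
# Hypothetical example: validate a host's disk layout against the
# Virtual SAN minimums (>= 1 HDD, >= 1 SSD) and VMware's recommended
# 1:10 SSD:HDD capacity ratio. All numbers below are illustrative only.
hdd_count=7; hdd_total_gb=7000   # e.g. seven 1 TB HDDs
ssd_count=1; ssd_total_gb=800    # e.g. one 800 GB SSD

if [ "$hdd_count" -ge 1 ] && [ "$ssd_count" -ge 1 ]; then
  echo "disk minimums met"
else
  echo "need at least 1 HDD and 1 SSD"
fi

# 1:10 SSD:HDD capacity rule of thumb
recommended_ssd_gb=$(( hdd_total_gb / 10 ))
if [ "$ssd_total_gb" -ge "$recommended_ssd_gb" ]; then
  echo "SSD capacity OK (>= ${recommended_ssd_gb} GB recommended)"
else
  echo "consider more SSD (recommended >= ${recommended_ssd_gb} GB)"
fi
```

More SSD than the 1:10 recommendation simply means more I/O can be cached, as the text above notes.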
Each vSphere host must have at least one network interface card (NIC). The NIC must be at least 1 Gb capable; however, as a best practice, VMware recommends 10 Gb network interface cards. A Distributed Switch can optionally be configured between all hosts in the Virtual SAN cluster, although VMware Standard Switches (VSS) will also work. With a Distributed Switch, NIOC can also be enabled to dedicate bandwidth to the Virtual SAN network. A Virtual SAN VMkernel port must be configured on each host.
The VMkernel port is labeled Virtual SAN. This port is used for inter-node cluster communication, and also for reads and writes when one of the vSphere hosts in the cluster owns a particular virtual machine but the actual data blocks making up the virtual machine files are located on a different vSphere host in the cluster. In this case, I/O must traverse the network configured between the hosts in the cluster.
Begin by launching the Firefox browser and logging in to the vSphere Web Client. We are using the Windows-based vCenter, so your login will be Administrator. (Virtual SAN (VSAN) is supported on both the Windows and appliance versions of vCenter Server.)

User name: Administrator
Password: VMware1!
With esx-01a.corp.local selected, navigate to Manage > Networking > VMkernel adapters. We must now add a VMkernel adapter for the Virtual SAN traffic. Click the icon to add a new adapter.

Virtual SAN traffic
We have already attached each host to a distributed switch and created a Virtual SAN port group. You must select the port group to use for this host. Click 'Browse'.

Select Network
After VSAN Network is selected, your screen should look like the above. Then click Next.

Specify Virtual SAN for Port Group
Keep the default settings, but select Virtual SAN traffic. Click Next.
You should now see vmk3 added for the VSAN Network. A VMkernel adapter for Virtual SAN traffic must be added to each host in the cluster. We have already repeated the above steps for esx-02a, esx-03a, and esx-04a for you. Feel free to click on each host to see the VSAN VMkernel adapter. If you don't add this adapter to each host, a misconfiguration warning will appear on the Virtual SAN General tab.
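If you prefer the command line, the same check can be made from an ESXi 5.5 shell with `esxcli vsan network list`, which shows the VMkernel interface(s) tagged for VSAN traffic. The wrapper below is an illustrative sketch, not a lab step: when run anywhere other than an ESXi shell, it simply prints the command instead of executing it.

```shell
# Sketch: confirm this host has a VMkernel interface carrying VSAN traffic.
# On an ESXi 5.5 host, `esxcli vsan network list` prints the vmk interface(s)
# (e.g. vmk3) configured for Virtual SAN. Outside ESXi, echo the command so
# the wrapper is safe to run anywhere.
list_vsan_network() {
  if command -v esxcli >/dev/null 2>&1; then
    esxcli vsan network list
  else
    echo "would run: esxcli vsan network list"
  fi
}
list_vsan_network
```

Run this on each host in the cluster to confirm no host is missing its VSAN VMkernel adapter.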
Once our network adapters are in place, we can turn on Virtual SAN at the cluster level. Select Cluster Site A, then navigate to Manage > Settings > Virtual SAN > General > Edit.
Check "Turn ON Virtual SAN" and click OK. We are going to keep "Manual" selected for this lab, which means we must manually add disks. The Automatic option would claim all empty disks on the hosts for Virtual SAN. IGNORE the license warning "You must assign a license key to the cluster before the evaluation period of Virtual SAN expires"; a valid license has been applied for you.

Refresh
After the refresh you should see all 4 hosts in the Virtual SAN cluster, but no disks are yet in use.

Create Disk Group for Virtual SAN
From here we will create a new disk group that will use all eligible disks. Select Cluster Site A > Manage > Settings > Virtual SAN > Disk Management > Claim Disks.
Click "Select all eligible disks". In this lab we will claim all free disks that meet the Virtual SAN (VSAN) rules. Note the rules that must apply for a host & disk to be seen on this page. Each disk group must contain at least one SSD. The SSD is used as the write buffer/read cache, and the HDDs are used as data disks for capacity. VMware recommends, as a best practice, a 1:10 ratio between SSD and HDD capacity.
All hosts and disks should now be selected. Click "OK". Note that you can select any combination of eligible hosts and disks to meet your requirements; in this lab we will take all unclaimed disks.

Task Begins
Recent Tasks will show work underway. Due to the number of hosts and disks selected, this process will take about 2 minutes to complete.
Refresh
After 2 minutes, refresh the Web Client to show the new disk groups. Congratulations, your Virtual SAN is enabled with multiple valid disk groups.
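From an ESXi 5.5 shell you can cross-check the result with `esxcli vsan storage list` (the disks claimed by Virtual SAN, including which device is the SSD of each disk group) and `esxcli vsan cluster get` (this host's cluster membership). As before, this is an illustrative sketch rather than a lab step; the wrapper prints the commands when esxcli is not available locally.

```shell
# Sketch: CLI cross-check after creating disk groups. On an ESXi 5.5 host,
# these esxcli namespaces show the claimed disks and the host's VSAN cluster
# membership. Outside an ESXi shell, just print what would be run.
run_or_echo() {
  if command -v esxcli >/dev/null 2>&1; then
    esxcli "$@"
  else
    echo "would run: esxcli $*"
  fi
}
run_or_echo vsan storage list   # disks claimed by Virtual SAN per disk group
run_or_echo vsan cluster get    # local host's VSAN cluster membership state
```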
vsanDatastore
A vsanDatastore has also been created. To see its capacity, navigate to Datastores > vsanDatastore > Manage > Settings > General. Ignore the ds-site-nfs01 (inactive) message; this is a result of the lab environment, and you will find this datastore active. The capacity shown is an aggregate of the HDDs taken from each of the vSphere hosts in the cluster (less some vsanDatastore overhead). The SSD volumes are not considered when the capacity calculation is made.
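To make the capacity math concrete, here is a small sketch with hypothetical numbers (not this lab's exact configuration): only the HDDs are aggregated into datastore capacity, while SSD capacity is cache and is excluded.

```shell
# Hypothetical capacity calculation for a vsanDatastore: 4 hosts, each
# contributing 2 x 1 TB HDDs and 1 x 200 GB SSD. Only HDDs count toward
# datastore capacity; the SSDs serve as cache and are excluded. The real
# datastore also subtracts some overhead from this raw figure.
hosts=4
hdd_per_host_gb=2000      # 2 x 1 TB HDDs per host
ssd_per_host_gb=200       # excluded from the capacity calculation
raw_capacity_gb=$(( hosts * hdd_per_host_gb ))
echo "raw vsanDatastore capacity: ${raw_capacity_gb} GB (less some overhead)"
```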
For each vSphere host to be aware of the capabilities of Virtual SAN, and to communicate between vCenter and the storage layer, a Storage Provider is created. Each vSphere host has a storage provider once the Virtual SAN cluster is formed. The storage providers are registered automatically with SMS (Storage Management Service) by vCenter. However, it is best to verify that the storage provider on one of the vSphere hosts has successfully registered and is active, and that the storage providers from the remaining vSphere hosts in the cluster are registered and in standby mode. Navigate to the vCenter server > Manage > Storage Providers to check the status. In this four-node cluster, one of the Virtual SAN providers is online and active, while the other three are in standby. Each vSphere host participating in the Virtual SAN cluster has a provider, but only one needs to be active to provide Virtual SAN datastore capability information. Should the active provider fail for some reason, one of the standby storage providers will take over.
VM Storage Policies are similar in some respects to the vSphere 5.0 & 5.1 Profile-Driven Storage feature. VM Storage Policies are enabled on vSphere 5.5 when you enable a Virtual SAN cluster. To begin, return to the Home screen.

Select VM Storage Policies
Click the Enable VM Storage Policies per compute resource icon (the icon with the check mark).
You will notice that VM Storage Policies is enabled on Cluster Site A automatically; enabling a Virtual SAN cluster turns on VM Storage Policies. Note that host esx-05a.corp.local, which is not part of the cluster, shows a status of Unknown. Your screen should look like the above; you can Close the window. The capabilities of the vsanDatastore should now be visible during VM Storage Policy creation. By using a subset of the capabilities, a vSphere admin can create a storage policy for their VMs to guarantee Quality of Service (QoS).
LAB GUIDE /30
You should be back at VM Storage Policies. Click the icon with the plus sign to create a new storage policy.

Create a new VM Storage Policy
In this example we walk through creating a new storage policy rule for a print server. In the Name field enter Print Server, then click Next to continue.

Rule-Sets
Spend a moment reading this page to learn about rule-sets. Click Next when ready
Create a Rule
Select VSAN from the capabilities list (1), then click "Add capability..." (2) to view the capabilities available. Click Cancel to exit the wizard. In Module 3 you will take a deeper dive on rule capabilities. When you enable Virtual SAN, a default VM Storage Policy is created with the following capabilities: Number of failures to tolerate = 1 and Force Provisioning = 1.

Default Virtual Machine Storage Policies

We will verify this behavior when doing a Storage vMotion.
Click on the Home icon in the vSphere Client. Then click Hosts and Clusters on the Main Screen.
Select Cluster Site A and navigate to base-sles. Right click on base-sles and select the Migrate option.
Select the Change datastore option and click Next.

Move the VM to a Virtual SAN Datastore
Select None from the VM Storage Policy dropdown. Pick the vsanDatastore from the Compatible datastores and click Next.

Review Selection
Review the changes being made and click 'Finish' to migrate the VM.
Verification
The Storage vMotion will take a few minutes to complete. On the Summary page of the base-sles VM you will notice the Storage Policy is blank and that the VM base-sles resides on the vsanDatastore. Now let's verify that the default VM Storage Policy is applied to the objects belonging to the VM base-sles on the datastore vsanDatastore.
As a final step, you might be interested in seeing how your virtual machine's objects have been placed on the vsanDatastore. To view the placement, select base-sles > Manage > VM Storage Policies > Hard disk 1. The Physical Disk Placement view will show you on which host the components of your objects reside. The RAID 1 indicates that the VMDK has a replica. By default, any VM deployed to the vsanDatastore is mirrored for availability because the default Virtual SAN VM Storage Policy is enforced.
Storage vMotion from a Virtual SAN datastore

In preparation for Module 2, we will Storage vMotion the virtual machine base-sles back to the NFS datastore.
Select Cluster Site A and navigate to base-sles. Right click on base-sles and select the Migrate option.
Keep the default virtual disk format and VM Storage Policy (Keep existing VM storage policies). Pick ds-site-a-nfs01 from the list of datastores and click Next.
Open PuTTY
ssh to esx-01a.corp.local
Double-click on esx-01a.
Login to esx-01a.corp.local
Using username "root".
Using keyboard-interactive authentication.
Password:
The time and date of this login have been sent to the system logs.

VMware offers supported, powerful system administration tools. Please
see www.vmware.com/go/sysadmintools for details.

The ESXi Shell can be disabled by an administrative user. See the
vSphere Security documentation for more information.
~ #
vsan Commands
~ # esxcli vsan
Usage: esxcli vsan {cmd} [cmd options]

Available Namespaces:
  datastore        Commands for VSAN datastore configuration
  network          Commands for VSAN host network configuration
  storage          Commands for VSAN physical storage configuration
  cluster          Commands for VSAN host cluster configuration
  maintenancemode  Commands for VSAN maintenance mode operation
  policy           Commands for VSAN storage policy configuration
  trace            Commands for VSAN trace configuration
Typing esxcli vsan gives you a list of all the esxcli namespaces related to VSAN, with a brief description of each.
vsan cluster
~ # esxcli vsan cluster get
Cluster Information
   Enabled: true
   Current Local Time: 2013-09-18T09:55:40Z
   Local Node UUID: 5228df36-776b-505a-35cd-005056808f33
   Local Node State: AGENT
   Local Node Health State: HEALTHY
   Sub-Cluster Master UUID: 52290240-9add-3201-0a17-00505680ff72
   Sub-Cluster Backup UUID: 5228efe9-3da8-ff3b-44d7-0050568033b1
   Sub-Cluster UUID: 52d1c8ca-c7b4-8853-d6f4-159265c9554e
   Sub-Cluster Membership Entry Revision: 8
   Sub-Cluster Member UUIDs: 52290240-9add-3201-0a17-00505680ff72, 5228efe9-3da8-ff3b-44d7-0050568033b1, 5228df36-776b-505a-35cd-005056808f33, 5228eece-e9ba-0af2-8616-005056809b63, 5228f336-8733-e2d9-0ea5-00505680d045
   Sub-Cluster Membership UUID: fb582f52-71e8-f226-b5a7-00505680ff72
To view details about the Virtual SAN cluster, such as its health or whether the local node is a master, backup, or agent, type: esxcli vsan cluster get
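Since the full output is lengthy, you can filter for just the fields of interest. A hedged sketch, assuming the ESXi busybox shell (grep -E is available there):

```shell
# Show only the local node's role and health from the cluster status.
esxcli vsan cluster get | grep -E "Local Node (State|Health State)"
```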
vsan network
~ # esxcli vsan network list
Interface
   VmkNic Name: vmk3
   IP Protocol: IPv4
   Interface UUID: e5072952-1cc0-ee9c-b96f-005056808f33
   Agent Group Multicast Address: 224.2.3.4
   Agent Group Multicast Port: 23451
   Master Group Multicast Address: 224.1.2.3
   Master Group Multicast Port: 12345
   Multicast TTL: 5
To view networking details, you can execute this command: esxcli vsan network list
vsan storage
~ # esxcli vsan storage list
mpx.vmhba2:C0:T1:L0
   Device: mpx.vmhba2:C0:T1:L0
   Display Name: mpx.vmhba2:C1:T0:L0
   Is SSD: false
   VSAN UUID: 523c0dc6-9744-c275-ef38-f195d5c22682
   VSAN Disk Group UUID: 52777487-f70a-0af3-198e-9ffc747ab13b
   VSAN Disk Group Name: mpx.vmhba2:C1:T0:L0
   Used by this host: true
   In CMMDS: true
   Checksum: 14554848699992102318
   Checksum OK: true
mpx.vmhba2:C0:T0:L0
   Device: mpx.vmhba2:C0:T0:L0
   Display Name: mpx.vmhba2:C0:T0:L0
   Is SSD: true
   VSAN UUID: 52777487-f70a-0af3-198e-9ffc747ab13b
   VSAN Disk Group UUID: 52777487-f70a-0af3-198e-9ffc747ab13b
   VSAN Disk Group Name: mpx.vmhba2:C0:T0:L0
   Used by this host: true
   In CMMDS: true
   Checksum: 654352745454525052
   Checksum OK: true
To view the details on the physical storage devices on this host that are part of the Virtual SAN, you can use this command: esxcli vsan storage list
vsan policy
~ # esxcli vsan policy getdefault
Policy Class  Policy Value
------------  --------------------------------------------------------
cluster       (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
vdisk         (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
vmnamespace   (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
vmswap        (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
To view the default policies in effect, such as how many host failures the VSAN can tolerate, run: esxcli vsan policy getdefault
Conclusion
This concludes Module 1 Virtual SAN Setup and Enable
VM Storage Policies are similar in some respects to the vSphere 5.0 & 5.1 Profile Driven Storage feature.
You should be back at VM Storage Policies. Click the icon with the plus sign to create a new storage policy.

Create a new rule for Tier 2 Apps
In this example we will create a new storage policy rule for our Tier 2 Apps. In the Name field enter Tier 2 Apps, and for the Description enter "Storage Policy for Tier 2 Apps". Click Next to continue.
Rule-Sets
Rule-sets are a way of using storage from different vendors; for example, you can have a single bronze policy with one Virtual SAN rule-set and one 3rd-party storage vendor rule-set. When Tier 2 Apps is chosen as the storage service level at VM deployment time, both Virtual SAN and the 3rd-party storage will match the requirements in the policy. We have already briefly looked at rule-sets in Module 1. Click Next when ready.
I want the VMs which have this policy associated with them to tolerate at least one component failure (host, network or disk). Select VSAN from the capabilities list (1), then select Number of failures to tolerate (2).

How many failures to tolerate?
Matching Resources
The nice thing about this is that I can immediately tell whether any datastores are capable of understanding the requirements in the Matching Resources window. As you can see, vsanDatastore is capable of understanding the requirements that I have placed in the VM Storage Policy. Note that there is no guarantee that the datastore can meet the requirements in the VM Storage Policy; it simply means that the requirements in the storage policy can be understood by the datastores which show up in the matching resources. Click Next.

Ready to complete
This is where we start to define the requirements for our VMs and the applications running in the VMs. Now we simply tell the storage layer what our requirements are by selecting the appropriate VM Storage Policy during VM deployment and the storage layer takes care of deploying the VM in such a way that it meets those requirements.
Supported
VM Snapshots
vSphere HA
vSphere DRS
vMotion
Storage vMotion
SRM/VR
VDP/VDPA
Not Applicable
SIOC
Storage DRS
Fault Tolerance (FT)
vSphere Flash Read Cache
Futures
Horizon View
vCloud Director
> 2TB VMDKs
Virtual SAN is fully integrated with many of VMware's storage and availability features. In this module we will turn on HA and use vMotion, but you will notice that many other availability features are supported. SIOC is not applicable because Virtual SAN takes its performance requirements from policy settings. Storage DRS is not applicable because Virtual SAN (VSAN) presents a single datastore. DPM may include hosts in a VSAN cluster, and we don't want to power off hosts, which may impact the storage policy.
Navigate to Hosts & Clusters > base-sles > Summary.

Migrate base-sles VM
Change datastore
In the VM Storage Policy dropdown, select Tier 2 Apps. Based on the storage policy, the disk format and destination datastore will be selected. Click Next.
Notice in the Summary screen the Storage Policy is now compliant and applied against the vsanDatastore. This demonstrates that you can migrate from traditional datastore formats such as NFS & VMFS to the new vsanDatastore format.
vMotion from host with local storage to host without local storage
Now let's take a look at hosts which are in the Virtual SAN cluster but do not have any local storage. These hosts can still use the vsanDatastore to host VMs. At this point, the virtual machine base-sles resides on the vsanDatastore. The VM is currently on a host that contributes local storage to the vsanDatastore (esx-01a.corp.local). We will now move it to a host (esx-04a.corp.local) that does not have any local storage. Once again select the base-sles virtual machine from the inventory. From the Actions drop-down menu, once again select Migrate. This time we choose the option to Change host.

Change Host
Select Cluster Site A (1) and check "Allow host selection within this cluster" at the bottom of the screen. Click Next.

Select esx-04 Host
Select esx-04a.corp.local. Click Next, then Finish. When the migration has completed, you will see how hosts that do not contribute any local storage to the vsanDatastore can still run virtual machines. This means that Virtual SAN can be scaled out on a compute-only basis.
To complete this chapter, migrate the VM back to esx-01a.corp.local, which has local storage making up the Virtual SAN datastore. Follow the steps above, then click Finish.
First, let's examine the object layout of the base-sles virtual machine. Select base-sles > Manage > VM Storage Policies > VM Home. This storage object has 3 components, two of which are replicas making up a RAID-1 mirror. The third is a witness disk that is used for tie-breaking. The next object is the disk, which you looked at in Module 1. To recap, this has Number of Disk Stripes Per Object set to 2; therefore there is a RAID-0 stripe component across two disks. To mirror an object with a stripe width of 2, 4 disks are required. Again, since Number of Failures to Tolerate is set to 1, there is also a RAID-1 configuration to replicate the stripe. So we have two RAID-0 (stripe) configurations, and a RAID-1 to mirror the stripes. The witnesses are once again used for tie-breaking functionality in the event of failures. The next step is to invoke some failures in the cluster to see how this impacts the components that make up our virtual machine storage objects, and also how Virtual SAN and vSphere HA interoperate to enable availability.
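The component arithmetic described above can be sketched as a quick calculation. This is illustrative only; the variable names are ours, not VSAN's:

```shell
# With Number of Failures To Tolerate (FTT) = 1 and a stripe width of 2:
ftt=1
stripes=2
replicas=$(( ftt + 1 ))                   # each mirror copy of the object
data_components=$(( replicas * stripes )) # RAID-0 components across disks
echo "replicas=${replicas} data_components=${data_components}"
```

This prints replicas=2 data_components=4, matching the four disks required above; witnesses are extra components on top of these.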
Navigate to Cluster Site A > Manage > Settings > vSphere HA > Edit.

Turn ON vSphere HA
Check the box to Turn ON vSphere HA. Click OK. By default, the vSphere HA Admission Controls have been set to tolerate a single host failure. You can examine this if you wish by opening the Admission Control settings to verify.
HA Enabled
Select Cluster Site A > Summary. After enabling HA you will see a warning about insufficient resources to satisfy the vSphere HA failover level. This is a transient warning and will go away after a few moments, once the HA cluster has finished configuring. You can try refreshing to remove it. The cluster Summary tab will show a vSphere HA overview (3).
In this first failure scenario we will take one of the hosts out of the cluster. This host does not have any running VMs, but we will use it to examine how the Virtual SAN (VSAN) replicas provide continuous availability for the VM, and how the Admission Control setting in vSphere HA and the Number of Failures to Tolerate are met. Select esx-02a.corp.local > Reboot.
Reboot esx-02a
Navigate to esx-02a.corp.local > Summary. In a short time we see warnings and errors related to the fact that vCenter can no longer reach the HA agent, followed by errors related to host connection and power status.
If we check the other hosts in the cluster, we see VSAN communication warnings. Navigate to esx-01a.corp.local > Summary.

Check base-sles VM Home
Navigate to base-sles > Manage > VM Storage Policies > VM Home (4)
With one host out of the cluster, object components that were held on that host are displayed as Absent and Object not found.

Check base-sles Hard Disk 1
For base-sles, take a look at Hard disk 1. Any components on the rebooted host show up as Absent. When the host rejoins the cluster, all components are resynchronized and put back in the Active state once this completes. A bitmap of blocks that have changed between replicas is maintained, and this is referenced to resynchronize the components. Now we can see one part of Virtual SAN availability: virtual machines continue to run even if components go absent.
Start VM
Once started make a note of the host it's running on. In this case it's esx-04a.corp.local. If you completed the migration step earlier it may show esx-01a.
VM Storage Policies
Navigate to base-sles > Manage > VM Storage Policies > Hard disk 1. We can see which host is acting as witness and which hosts provide the RAID 1 components. Just for fun, we will vMotion the VM to a RAID 1 component host and halt that host.

vMotion to esx-03a
If the VM is already running on esx-03a.corp.local (a host which also has a RAID 1 component) you can skip this step; otherwise, migrate to esx-03a.corp.local. Right click base-sles > Migrate and select esx-03a as the host.
Complete the Wizard to Migrate the VM to esx-03a.corp.local. Remember to check "Allow host selection within this cluster" on Step 2
Confirm esx-03a.corp.local
Navigate back to the Summary screen to confirm that esx-03a.corp.local is the host.

Reboot esx-03a
Host Status
base-sles Status
Navigate to base-sles > Manage > VM Storage Policies > Hard disk 1. Depending on how quickly you navigate in the browser, you may notice that base-sles is disconnected and the RAID 1 disk component is Absent. Why is the RAID 1 component termed Absent? In this particular failure scenario, i.e. a host failure, Virtual SAN identifies which objects are out of compliance and starts a timer with a timeout period of 60 minutes. If the component, in this case a mirror, comes back within 60 minutes, any differences will be synchronized and the object, in this case the VM Home or Hard disk, will return to compliance. If the component does not return within 60 minutes, Virtual SAN will create a new mirror copy.
Refresh
Refresh and you will soon see a change to the RAID 1 components and that base-sles is now available. Alarms will also be generated, and a warning found on the Summary page of each host (1): "The vSphere HA agent on this host cannot reach some of the management network addresses of other hosts... Host cannot communicate with all other nodes in the VSAN enabled cluster".
You will notice that HA has kicked in and restarted base-sles on another host, esx-04a.corp.local. Navigate to base-sles > Summary.

Quorum to run the VM
Finally, navigate to Datastores > vsanDatastore > Manage > Settings. Notice that the halted host is no longer responding under Disk Groups. If the failure persists for longer than 60 minutes, the components will be rebuilt on the remaining disks in the cluster.
Conclusion
This concludes Module 2 Virtual SAN with vMotion, Storage vMotion and HA Interoperability
From the Main screen, select the Home tab, and then click Hosts and Clusters. We will carry out the following steps to prepare our lab environment for additional exercises later. As the cluster Cluster Site A has DRS set to Partially Automated due to lab requirements, we will have to migrate any VMs manually to ensure we can enter maintenance mode. NOTE, however, that vSphere DRS is fully supported with Virtual SAN (VSAN). If a VM is already running on esx-04a.corp.local you can skip this step; otherwise, migrate to esx-03a.corp.local: right click your VM, in this case base-sles > Migrate, and complete the wizard to migrate the VM to esx-03a.corp.local. Remember to check "Allow host selection within this cluster".
1. Put the vSphere host called esx-04a.corp.local into Maintenance Mode. Right click the vSphere host called esx-04a.corp.local and select the option Enter Maintenance Mode. You may have to de-select "Move powered-off and suspended virtual machines to other hosts in the cluster".
Notice the message pertaining to the maintenance mode request, as the host esx-04a.corp.local is part of a Virtual SAN enabled cluster. Click OK.
Since the host esx-04a.corp.local is part of a DRS cluster, you will see a warning popup. Click OK.

Moving a vSphere Host out of the cluster
1. Move the vSphere host called esx-04a.corp.local out of the cluster. We can use a drag and drop operation for this. 2. Select the vSphere host called esx-04a.corp.local and drag it over Datacenter Site A.
Storage Policies
To begin, from the Home screen, select VM Storage Policies. By default, when you enable Virtual SAN (VSAN) on a cluster, VM Storage Policies are automatically enabled. By using a subset of the capabilities, a vSphere admin can create a storage policy for their VMs to guarantee Quality of Service (QoS).
You should be back at VM Storage Policies. Click the icon with the plus sign to create a new storage policy. This icon represents Create New VM Storage Policy.

Creating a VM Storage Policy (2)
Give the VM Storage Policy a name. Enter VDI-Desktops as the name, and enter "VM Storage Policy for VDI Desktops" as the description. Click Next to continue.
Next we get a description of rule-sets. Rule-sets are a way of using storage from different vendors; for example, you can have a single bronze policy with one VSAN rule-set and one 3rd-party storage vendor rule-set. When bronze is chosen as the storage service level at VM deployment time, both VSAN and the 3rd-party storage will match the requirements in the policy. Spend a moment reading this page to learn more about rule-sets. Click Next when ready.
The next step is to select a subset of the vendor-specific capabilities. To begin, you need to select the vendor, in this case called VSAN. Select Number of failures to tolerate.

Creating a VM Storage Policy (5)
The next step is to add the capabilities required for the virtual machines that you wish to deploy in your environment. In this particular example, I wish to specify an availability requirement: I want the VMs which have this policy associated with them to be tolerant of at least one component failure (host, network or disk). Click Next.

Creating a VM Storage Policy (6)
The nice thing about this is that I can immediately tell whether any datastores are capable of understanding the requirements in the Matching Resources window. As you can see, my vsanDatastore is capable of understanding the requirements that I have placed in the VM Storage Policy. Note that this is no guarantee that the datastore can meet the requirements in the storage service level.
It simply means that the requirements in the VM Storage Policy can be understood by the datastores which show up in the matching resources. This is where we start to define the requirements for our VMs and the applications running in them. Now we simply tell the storage layer what the requirements are by selecting the appropriate VM Storage Policy during VM deployment, and the storage layer takes care of deploying the VM in such a way that it meets those requirements. Click Next, then click Finish once you have reviewed the rules.
Complete the creation of the VM Storage Policy. This new policy should now appear in the list of VM Storage Policies.

Create a Virtual Machine and apply VM Storage Policy
Create a virtual machine which uses the VDI-Desktops profile created earlier. Right click on a vSphere host in the cluster and select New Virtual Machine. Give the VM a name, e.g. Windows 2008.
When it comes to selecting storage, you can now specify a VM Storage Policy (in this case VDI-Desktops). This will show that vsanDatastore is Compatible as a storage device, meaning once again that it understands the requirements placed in the storage policy. It does not mean that the vsanDatastore will implicitly be able to accommodate the requirements, just that it understands them. This is an important point to understand about Virtual SAN (VSAN).
Continue with the creation of this virtual machine, selecting the defaults for the remaining steps, including compatibility with vSphere 5.5 and later and Windows 2008 R2 (64-bit) as the Guest OS. When you get to step 2f, Customize hardware, in the Virtual Hardware tab, expand the New Hard Disk virtual hardware and you will see the storage service level set to VDI-Desktops. Reduce the Memory to 512 MB. Reduce the Hard Disk size to 1GB in order for it to be replicated across hosts (the default size is 40GB; we want to reduce this as this is a small lab environment, but needless to say it will work just fine in a physical environment). Click Next and click Finish.
Complete the wizard. When the VM is created, look at its Summary tab and check the compliance state in the VM Storage Policies window. It should say Compliant with a green check mark.
As a final step, you might be interested in seeing how your virtual machine's objects have been placed on the vsanDatastore. To view the placement, select your virtual machine > Manage > VM Storage Policies. If you select one of your objects, the Physical Disk Placement view will show you on which host the components of your objects reside, as shown in the example. The RAID 1 indicates that the VMDK has a replica. This is to tolerate a failure, the value that was set to 1 in the policy, so we can continue to run if there is a single failure in the cluster. The witness is there to act as a tiebreaker: if one host fails and one component is lost, the witness allows a quorum of storage objects to still reside in the cluster. Notice that all three components are on different hosts for this exact reason. At this point, we have successfully deployed a virtual machine with a level of availability that can be used as the base image for our VDI desktops. Examining the layout of the object above, we can see that Virtual SAN has put a RAID 1 configuration in place, placing each replica on a different host. This means that in the event of a host, disk or network failure on one of the hosts, the virtual machine will still be available.
When you use Virtual SAN, you can define virtual machine storage requirements, such as performance and availability, in the form of a policy. The policy requirements are then pushed down to the Virtual SAN layer when a virtual machine is being created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements. When you enable Virtual SAN on a host cluster, a single Virtual SAN datastore is created. In addition, enabling Virtual SAN configures and registers the Virtual SAN storage provider that uses VASA to communicate a set of the datastore capabilities to vCenter Server. When you know storage requirements of your virtual machines, you can create a storage policy referencing capabilities that the datastore advertises. You can create several policies to capture different types or classes of requirements.
Flash Read Cache Reservation: flash capacity reserved as read cache for the storage object, specified as a percentage of the logical size of the object. To be used only for addressing read performance issues. Reserved flash capacity cannot be used for other objects; unreserved flash is shared fairly among all objects. Default value: 0%. Maximum value: 100%.

The cache reservation is specified as a percentage of the logical size of the storage object (i.e. the VMDK), with up to 4 decimal places. This fine granular unit size is needed so that administrators can express sub-1% units. Take the example of a 1TB disk: if we limited the read cache reservation to 1% increments, this would mean cache reservations in increments of 10GB, which in most cases is far too much for a single virtual machine.

Note: you do not have to set a reservation in order to get cache. All virtual machines equally share the read cache of an SSD. The reservation should be left at 0 (default) unless you are trying to solve a real performance problem and you believe dedicating read cache is the solution. In the initial version of Virtual SAN, there is no proportional share mechanism for this resource.
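The 1TB example above works out as follows; a small illustrative calculation in shell integer arithmetic (sizes in MB, variable names our own):

```shell
logical_mb=$(( 1024 * 1024 ))            # a 1 TB VMDK expressed in MB
# A 1% granularity would reserve 1% of 1 TB of cache:
one_percent_mb=$(( logical_mb / 100 ))   # roughly 10 GB
# With 4 decimal places you can express e.g. 0.01% instead:
sub_percent_mb=$(( logical_mb / 10000 )) # roughly 100 MB
echo "1% = ${one_percent_mb} MB, 0.01% = ${sub_percent_mb} MB"
```

This is why the sub-1% granularity matters: 1% steps force ~10GB cache reservations on a 1TB disk, while 0.01% steps allow reservations of around 100MB.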
Object Space Reservation: percentage of the logical size of the storage object that will be reserved (thick provisioned) upon VM provisioning. The rest of the storage object is thin provisioned. Default value: 0%. Maximum value: 100%.

All objects deployed on VSAN are thinly provisioned. The Object Space Reservation is the amount of space to reserve, specified as a percentage (%) of the total object address space. This property is used for specifying a thick provisioned storage object. If Object Space Reservation is set to 100%, all of the storage capacity requirements of the VM are reserved up front (thick).

Force Provisioning: if this option is set to Yes, the object will be provisioned even if the policy specified in the storage policy is not satisfiable with the resources currently available in the cluster. VSAN will try to bring the object into compliance if and when resources become available. Default value: No.

If this parameter is set to Yes, the object will be provisioned even if the policy specified in the VM Storage Policy is not satisfied by the datastore. The virtual machine will be shown as non-compliant in the VM Summary tab and the relevant VM Storage Policy views in the UI. When additional resources become available in the cluster, VSAN will bring this object to a compliant state. However, if there is not enough space in the cluster to satisfy the reservation requirements of at least one replica, the provisioning will fail even if Force Provisioning is turned on.
The first step is to edit the VDI-Desktops policy created earlier and add a stripe width requirement to the policy. Navigate back to Rules & Profiles, select VM Storage Policies, select the VDI-Desktops policy and click Edit. In Rule-Set 1, add a new capability called Number of disk stripes per object and set the value to 2. This is the number of disks that the stripe will span. Click OK.
You will observe a popup which states that the policy is already in use by a number of virtual machines. We will need to synchronize the virtual machines with the policy after saving the changes. You have 2 options: Manually later or Now. Select Now and click Yes.
Staying on the VDI-Desktops policy, click on the Monitor tab. In the VMs & Virtual Disks view, you will see that the Compliance Status is Compliant.
This task may take a little time. We will now re-examine the layout of the storage object to see if the request for a stripe width of 2 has been implemented. From the VM Storage Policies view, select Virtual Machines > Windows 2008 > VM Storage Policies and select the Hard disk 1 object. Now we can see that the disk layout has changed significantly. Because we have requested a stripe width of two, the components that make up the stripe are placed in a RAID-0 configuration. And since we still have our failures to tolerate requirement, these RAID-0s must be mirrored by a RAID-1. And because we now have multiple components distributed across the 3 hosts, additional witnesses are needed in case of a host failure.
We previously moved the vSphere host called esx-04a.corp.local out of the cluster. This vSphere host does not have any local storage, so it cannot contribute storage to the vsanDatastore, but it can contribute compute resources. Drag the vSphere host back into the cluster. Then take the host out of Maintenance Mode: right click the host and select Exit Maintenance Mode.
With the esx-04a.corp.local host selected, select the Related Objects tab and select Datastores. Here you will see that the vSphere host has access to the vsanDatastore, even though it did not contribute storage to the datastore. Notice our vsanDatastore capacity is still around 118GB (less some vsanDatastore overhead).
We are going to look at the ability to add another vSphere host with storage to the Virtual SAN (VSAN) cluster and observe the scale-out capabilities of VSAN. At this point, we have four vSphere hosts in the cluster, although only three are contributing local storage to the Virtual SAN datastore. Let's check the status of the vsanDatastore. Navigate to the vsanDatastore > Summary tab. The ~5GB consumed reflects the stripe and replicas for the VM we created earlier.
There is a fifth vSphere host (esx-05a.corp.local) in your inventory that has not yet been added to the cluster. We will do that now and examine how the vsanDatastore seamlessly grows to include this new capacity. Navigate to the cluster object in the inventory, right click and select the action Move hosts into cluster.
From the list of available hosts (you should only see esx-05a.corp.local), select this host and click OK. Select "Put all of this host's virtual machines in the cluster's root resource pool. Resource pools currently present on the host will be deleted". Click OK.
Once the vSphere host esx-05a.corp.local has been added to the cluster, you will notice an alert that the host cannot communicate with all the other nodes in the VSAN enabled cluster, and that the VSAN network is not configured.
The next step is to add a Virtual SAN network to this host: create a VSAN VMkernel network adapter using the distributed port group called VSAN Network. Select the esx-05a.corp.local host. Select the Manage tab, and select Networking. Select VMkernel Adapters and click the Add Host Networking icon. Select VMkernel Network Adapter. In Select an existing distributed port group, click Browse... and pick the port group called VSAN Network. In the Enable services section, pick Virtual SAN traffic. Leave the IPv4 settings as Obtain IPv4 settings automatically.
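For reference, the equivalent can also be sketched from the ESXi shell. This is a hedged sketch, not part of the lab steps: it assumes a vmknic (here vmk3, following the naming used earlier in the lab) has already been created on the VSAN Network port group, and flag syntax may differ by build.

```shell
# Tag an existing VMkernel interface for Virtual SAN traffic (ESXi 5.5).
esxcli vsan network ipv4 add -i vmk3

# Confirm the interface is now listed for VSAN:
esxcli vsan network list
```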
Navigate to esx-05a.corp.local, right-click, and select All vCenter Actions > Reconfigure for vSphere HA.
Once completed, the alert will disappear and esx-05a.corp.local will be successfully added to the Virtual SAN enabled cluster.
Now that we have the network and HA configured, let's look at Disk Management. Select Cluster Site A > Manage > Settings > Virtual SAN > Disk Management.
Select the host called esx-05a.corp.local. In the Show drop-down, select Ineligible. Here we can see that there are 3 disks ineligible (3 non-SSD, or magnetic, disks) to be added to the vsanDatastore. One of the requirements for adding disks to a Virtual SAN cluster is that they need to be blank, with no disk partitions present. These disks have VMFS partitions; we will look at that shortly. The 2GB disk is actually our vSphere boot disk. The disks we are interested in are the 2 x 10GB disks, e.g. mpx.vmhba1:C0:T1:L0 and mpx.vmhba2:C0:T1:L0. FYI: This lab simply enforces the requirement that disks be blank, with no partitions, before they are added to a Virtual SAN cluster. In a production environment, consult with your storage admin before removing any partitions from disks. They may be valid VMFS partitions in use by virtual machines. Exercise caution when deleting VMFS partitions.
Let's now look at the disk partitions on these disks. Run the following command:

esxcli storage core device partition list

We can then use the partedUtil command to interrogate the partitions on each disk.
~ # partedUtil getptbl /vmfs/devices/disks/mpx.vmhba1\:C0\:T1\:L0
gpt
1305 255 63 20971520
2 6144 20971486 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

~ # partedUtil getptbl /vmfs/devices/disks/mpx.vmhba2\:C0\:T1\:L0
gpt
1305 255 63 20971520
2 6144 20971486 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
Here we can see a VMFS partition as partition 2 on each of the disks.
Let's now remove these VMFS partitions so that the disks become eligible for use with Virtual SAN (VSAN). FYI: as noted above, in a production environment consult with your storage admin before removing any partitions; they may be valid VMFS partitions in use by virtual machines, so exercise caution when deleting them. Run the following commands. These commands delete partition 2 (the VMFS partition); note there is a space between the disk's MPX reference and the partition number.
partedUtil delete /vmfs/devices/disks/mpx.vmhba1\:C0\:T1\:L0 2
partedUtil delete /vmfs/devices/disks/mpx.vmhba2\:C0\:T1\:L0 2
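If you want to confirm the partitions are gone before returning to the Web Client, you can re-run the getptbl query used earlier (a quick sanity check, assuming your shell session on esx-05a.corp.local is still open); after a successful delete, only the partition-table header should remain.

```shell
# Re-check both disks; the "vmfs" partition lines should no longer appear
partedUtil getptbl /vmfs/devices/disks/mpx.vmhba1\:C0\:T1\:L0
partedUtil getptbl /vmfs/devices/disks/mpx.vmhba2\:C0\:T1\:L0
```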
Back in the vSphere Web Client, select Cluster Site A > Manage > Settings > Disk Management > esx-05a.corp.local. In the Show: section, select Not in use. Here we can see that we now have 3 disks (1 SSD and 2 magnetic disks) available to use for Virtual SAN.
Select the SSD disk from the top section and the 2 HDD disks from the lower section. Click OK.
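For reference, the same disk group could be claimed from the ESXi shell instead of the Web Client. This is a hedged sketch: the SSD device name is a placeholder (look up the real name with esxcli storage core device list), while the two HDD names are the lab disks identified earlier.

```shell
# Claim one SSD (cache) and two magnetic disks (capacity) into a disk group
# <ssd-device> is a placeholder -- substitute the host's actual SSD name
esxcli vsan storage add -s <ssd-device> \
    -d mpx.vmhba1:C0:T1:L0 \
    -d mpx.vmhba2:C0:T1:L0
```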
Here you will see that the vSphere host called esx-05a.corp.local is now contributing its local storage to the Virtual SAN cluster. We can see that the disk group on this host is made up of one 10GB SSD and two 10GB HDDs.
Revisit the vsanDatastore Summary view and check whether the size has increased with the addition of the new host and disks. Select Storage > vsanDatastore > Summary. You should observe that the capacity of the vsanDatastore has seamlessly increased from ~118GB to ~138GB with the addition of the two 10GB HDDs (remember that SSDs do not contribute towards capacity).
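The capacity jump is easy to sanity-check, because only the magnetic disks contribute usable space (the SSD acts as cache only). A small arithmetic sketch using the lab's approximate numbers:

```shell
# Only HDDs add capacity; the new host's SSD does not count
old_capacity_gb=118   # approximate capacity before adding esx-05a
new_hdds=2            # magnetic disks contributed by esx-05a
hdd_size_gb=10
new_capacity_gb=$(( old_capacity_gb + new_hdds * hdd_size_gb ))
echo "Expected vsanDatastore capacity: ~${new_capacity_gb}GB"
```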
Navigate back to Rules & Profiles, select VM Storage Policies, select the VDI-Desktops policy, and click Edit. In Rule-Set 1, add a new capability called Number of disk stripes per object and set the value to 3. This is the number of disks that the stripe will span. Click OK.
You will see a popup stating that the VM Storage Policy is in use. We will need to synchronize the virtual machine with the policy after saving the changes. Select Manually later and click Yes.
Back in the inventory tree, select the virtual machine that you created earlier, e.g. Windows 2008, and select Manage > VM Storage Policies. Since we changed the VM storage policy capabilities, you will notice that the Compliance Status is now Out of Date. This means we need to reapply the VM storage policy to all out-of-date entities.
Click the Reapply the VM storage policy to all out of date entities icon (third from the left).
Answer Yes to the popup. The compliance state should change once the updated policy is applied. Now we can see that the disk layout has changed significantly. Because we requested a stripe width of three, the components that make up each stripe are placed in a RAID-0 configuration. Since we still have our failures-to-tolerate requirement, these RAID-0 stripes must in turn be mirrored by a RAID-1. And because we now have multiple components distributed across the 5 hosts, additional witnesses are needed in case of a host failure.
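The data-component count follows directly from the two policy settings. A small sketch, assuming the default Number of failures to tolerate of 1 (witnesses are excluded, since their number is decided at placement time and varies with the layout):

```shell
stripes=3                                 # Number of disk stripes per object (set above)
ftt=1                                     # Number of failures to tolerate (default)
mirrors=$(( ftt + 1 ))                    # RAID-1 copies of the object
data_components=$(( stripes * mirrors ))  # RAID-0 components across all mirrors
echo "Mirrors: ${mirrors}, data components: ${data_components}"
```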
Here we can see that VMkernel interface vmk3 is used for Virtual SAN traffic.
mpx.vmhba1:C0:T1:L0
   Device: naa.6000c29545c09f34844bdc1ccaf7a7b9
   Display Name: mpx.vmhba1:C0:T1:L0
   Is SSD: false
   VSAN UUID: 52fa0fd3-4a0a-0f03-ab62-cc0ccda18410
   VSAN Disk Group UUID: 52777487-f70a-0af3-198e-9ffc747ab13b
   VSAN Disk Group Name: mpx.vmhba1:C0:T1:L0
   Used by this host: true
   In CMMDS: true
   Checksum: 15060996719604146982
   Checksum OK: true
Here we can see some interesting information about the disks used by Virtual SAN (VSAN): the device information, whether it is an SSD, the VSAN disk group information, and whether the disk is in use. Note that one of the disks is an SSD and the other 2 disks are not.
Here we can get some information about the Virtual SAN cluster:
1. The Local Node UUID of the vSphere host you ran the command from.
2. The Local Node State; this can be Master, Backup, or Agent.
3. The Node Health State.
4. The UUIDs of the Master and Backup nodes.
5. The number of members in the cluster (Sub-Cluster Member UUIDs); in our case, we have 5 nodes in the Virtual SAN cluster.
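These fields correspond to the output of the esxcli vsan cluster get command, which can be run from any member host (field names may differ slightly between releases).

```shell
# Show this host's view of Virtual SAN cluster membership
esxcli vsan cluster get
```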
Conclusion
This concludes Module 3 Virtual SAN Storage Level Agility
VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright 2013 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.