The Host Server

VMware ESXi Configuration Guide

October 2019

This guide provides configuration settings and considerations for Hosts running VMware ESXi
with SANsymphony. Basic VMware storage administration skills are assumed, including how to
connect to iSCSI and Fibre Channel target ports and how to discover, mount and format disk
devices.

Earlier releases of ESXi may have previously used different settings than the ones listed here.
When upgrading an ESXi Host, if a previously-configured setting is no longer listed here, leave
the original setting's value as it is. Do not change it unless we state a new value.

DataCore's statement about the differences between the information in this document and
VMware's own Hardware Compatibility List can be found here:

DataCore Software and VMware's Hardware Compatibility List (HCL)
https://datacore.custhelp.com/app/answers/detail/a_id/1131


Table of contents

Changes to this document 3


VMware ESXi compatibility lists 4
ESXi operating system versions 4
VMware ESXi Path Selection Policies (PSP) 4
vStorage API for Array Integration (VAAI) support 5
vSphere Metro Storage Clusters (vMSC) 5
VMware VVOL VASA API 2.0 6
VMware Site Recovery Manager (SRM) 6
Qualified vs. Not Qualified vs. Not Supported 7
The DataCore Server's settings 8
DataCore Servers running in Virtual Machines 8
Port roles 8
Multipathing 8
ALUA support 9
Serving Virtual Disks 9
VMware ESXi Path Selection Policies (PSP) 9
The VMware ESXi Host's settings 10
iSCSI connections 10
Advanced settings 10
Enable VM Component Protection (VMCP) 11
VMware Path Selection Policies 12
Round Robin PSP 13
Fixed PSP 15
Most Recently Used PSP 17
Known issues 19
Failover 20
iSCSI connections 21
QLogic network adaptors 22
Server Hardware 22
Unserving Virtual Disks 22
VAAI 23
vMotion 23
vSphere Client and vSphere Web Client 24
VMware Tools 25
Microsoft Clusters in Virtual Machines 25
Appendix A 27
Preferred Server & Preferred Path settings 27
Appendix B 29
Reclaiming storage from Disk Pools 29
Appendix C 32
Moving from Most Recently Used to another PSP 32
Previous Changes 33
Changes to this document
The most recent version of this document is available from here:
https://datacore.custhelp.com/app/answers/detail/a_id/838

Changes since July 2019


Added
Known Issues – Serving and unserving Virtual Disks
Affects ESX 6.5
Virtual machine hangs and issues an ESXi host-based VSCSI reset
For virtual machines with disks larger than 256GB on VMFS5, or larger than 2.5TB on VMFS6, the
following symptoms may be seen:
 Host becomes unresponsive and cannot restart the specific hung virtual machines.
 Host requires a reboot to restart the virtual machines.
See https://kb.vmware.com/s/article/2152008.

Updated
General
This document has been reviewed for SANsymphony 10.0 PSP 9.
No additional settings or configurations are required.

Removed
General
Any information that is specific to ESXi versions 5.0 and 5.1 has been removed as these are
considered ‘End of General Support’. See: https://kb.vmware.com/s/article/2145103

For previous changes made to this document please see page 33

VMware ESXi compatibility lists
ESXi operating system versions
Applies to all versions of SANsymphony 10.x

ESXi WITH ALUA WITHOUT ALUA

5.5 Qualified Not Qualified

6.x Qualified Not Qualified

Notes:

Qualified vs. Not Qualified vs. Not Supported


See page 7 for definitions.

DataCore Server Front-End Port connections


Both Fibre Channel and iSCSI are supported.
iSER (iSCSI Extensions for RDMA) is not supported.

VMware ESXi Path Selection Policies (PSP)


Applies to all versions of SANsymphony 10.x

ESXi MRU FIXED ROUND ROBIN

5.5 Not Tested Tested/Works Tested/Works

6.x Not tested Tested/Works Tested/Works

Notes:

ESXi version 6.x


Fixed and RR PSPs are both listed on VMware's Hardware Compatibility List. MRU is not.

ESXi version 5.5


Only RR PSP is listed on VMware's Hardware Compatibility List. Both Fixed and MRU are not.


vStorage API for Array Integration (VAAI) support


Applies to all versions of SANsymphony 10.x

ESXi

5.5 Tested/Works

6.x Tested/Works

Notes:

VAAI-specific commands that are supported by the DataCore Server:


 Atomic Test & Set (ATS)
 Clone Blocks/Full Copy/XCOPY
 Zero Blocks/Write Same
 Block Delete/SCSI UNMAP
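To check which of these VAAI primitives an ESXi Host has detected as supported on a given
DataCore disk device, a command of the following form can be run on the ESXi Host's console
(the NAA identifier shown is a placeholder for a real SANsymphony device):

esxcli storage core device vaai status get -d naa.60030d90xxxxxxxxxxxxxxxxxxxxxxxx

The output reports the ATS, Clone, Zero and Delete status for the device.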

vSphere Metro Storage Clusters (vMSC)


Applies to all versions of SANsymphony 10.x

ESXi WITH ALUA WITHOUT ALUA

5.5 Tested/Works Not Tested

6.x Tested/Works Not Tested

Notes:

Virtual Disks used for VMSC


Virtual Disks that have been directly formatted with VMFS5 (or later) are supported.
Virtual Disks that have been upgraded to VMFS5 (or later) from an earlier VMFS version are
not supported.


VMware VVOL VASA API 2.0


ESXi 10.0 PSP 3 and earlier 10.0 PSP 4 and later

5.5 Not VVOL/VASA compatible Not VVOL/VASA compatible

6.x Not VVOL/VASA compatible Tested/Works


Notes:
Configuration specific notes
Please refer to 'Getting Started with the DataCore VASA Provider'
https://docs.datacore.com/SSV-WebHelp/Getting_Started_with_VASA_Provider.htm

VMware Site Recovery Manager (SRM)


Applies to all versions of SANsymphony 10.x

ESXi

5.5 Tested/Works

6.0 Tested/Works

6.5 Tested/Works (1)

6.7 Not Qualified

Notes:
Requires the 'DataCore SANsymphony Storage Replication Adapter'; please see the release
notes at https://datacore.custhelp.com/app/downloads

Storage IO Control
Applies to all versions of SANsymphony 10.x

ESXi

6.0 and earlier N/A

6.5 Tested/Works

6.7 Tested/Works

Notes:
No additional configuration is required on the DataCore Server.

(1) Only supported/works with DataCore's SANsymphony Storage Replication Adaptor 2.0 or later.


Qualified vs. Not Qualified vs. Not Supported

Qualified
This combination has been tested by DataCore, with all the host-specific settings listed in
this document applied, using non-mirrored, mirrored and Dual Virtual Disks.

Not Qualified
This combination has not yet been tested by DataCore using Mirrored or Dual Virtual Disk
types. DataCore cannot guarantee 'high availability' (failover/failback, continued access etc.)
even if the host-specific settings listed in this document are applied. Self-qualification may
be possible; please see Technical Support FAQ #1506.

Mirrored or Dual Virtual Disk types are configured at the user's own risk; however, any
problems that are encountered while using VMware versions that are 'Not Qualified' will still
get root-cause analysis.

Non-mirrored Virtual Disks are always considered 'Qualified' - even for 'Not Qualified'
combinations of VMware/SANsymphony.

Not Supported
This combination has either failed 'high availability' testing by DataCore using Mirrored or
Dual Virtual Disk types, or the operating system's own requirements/limitations (e.g. age,
specific hardware requirements) make it impractical to test. DataCore will not guarantee
'high availability' (failover/failback, continued access etc.) even if the host-specific settings
listed in this document are applied. Mirrored or Dual Virtual Disk types are configured at the
user's own risk. Self-qualification is not possible.

Mirrored or Dual Virtual Disk types are configured at the user's own risk; however, any
problems that are encountered while using VMware versions that are 'Not Supported' will
get best-effort Technical Support (e.g. to get access to Virtual Disks) but no root-cause
analysis will be done.

Non-mirrored Virtual Disks are always considered 'Qualified' – even for 'Not Supported'
combinations of VMware/SANsymphony.

ESXi versions that are End of Support Life, Availability or Distribution


Self-qualification may be possible for versions that are considered ‘Not Qualified’ by
DataCore but only if there is an agreed ‘support contract’ with VMware. Please contact
DataCore Technical Support before attempting any self-qualification of ESXi versions that
are End of Support Life.

For any problems that are encountered while using VMware versions that are EOSL, EOA or
EOD with DataCore Software, only best-effort Technical Support will be performed (e.g. to
get access to Virtual Disks). Root-cause analysis will not be done.

Non-mirrored Virtual Disks are always considered 'Qualified'.


The DataCore Server's settings


DataCore Servers running in Virtual Machines
Please see:
Hyperconverged and Virtual SAN Best Practices guide:
https://datacore.custhelp.com/app/answers/detail/a_id/1155

Operating system type


When registering the Host choose the 'VMware ESXi' menu option.

Port roles
Ports that are used to serve Virtual Disks to Hosts should only have the Front End role
checked. While it is technically possible to check additional roles on a Front End port (i.e.
Mirror and Backend), this may cause unexpected results after stopping the SANsymphony
software.

Any port that has the front-end role (and is serving Virtual Disks to Hosts) and that also has the
mirror and/or back-end role enabled will remain ‘active’ even when the SANsymphony software is
stopped. There is a slight difference in behavior depending on the version of
SANsymphony installed.

SANsymphony 10.0 PSP 7 and earlier


Any port that has the mirror and/or back-end role checked will remain ‘active’ after the
SANsymphony software has been stopped.

SANsymphony 10.0 PSP 8 and later


Only ports with the back-end role checked will remain ‘active’ after the SANsymphony
software has been stopped.

Front-end ports that are serving Virtual Disks but remain active after the SANsymphony
software has been stopped can cause unexpected results for some Host operating systems
as they continue to try to access Virtual Disks from the ‘active’ port on the now-stopped
DataCore Server. This, in turn, may end up delaying Host fail-over or result in complete loss of
access from the Host’s application/Virtual Machines.

Multipathing
The Multipathing Support option should be enabled so that Mirrored Virtual Disks or Dual
Virtual Disks can be served to Hosts from all available DataCore FE ports. Also see the
Multipathing Support section from the SANsymphony Help:
https://docs.datacore.com/SSV-WebHelp/Hosts.htm

Non-mirrored Virtual Disks and Multipathing


Non-mirrored Virtual Disks can still be served to multiple Hosts and/or multiple Host Ports
from one or more DataCore Server FE Ports if required; in this case the Host can use its own
multipathing software to manage the multiple Host paths to the Single Virtual Disk as if it
was a Mirrored or Dual Virtual Disk.


ALUA support
The ALUA support option (Asymmetrical Logical Unit Access) should be enabled if required
and if Multipathing Support has also been enabled (see above). Please refer to the
Operating system compatibility table on page 4 to see which combinations of VMware ESXi
and SANsymphony support ALUA. More information on Preferred Servers and Preferred
Paths used by the ALUA function can be found in Appendix A on page 27.

Serving Virtual Disks


For the first time
DataCore recommends that, before serving any Virtual Disk to a Host for the first time, all
DataCore Front-End ports on all DataCore Servers are correctly discovered by the Host.
Then, from within the SANsymphony Console, verify that the Virtual Disk is marked Online,
up to date, and that the storage sources have a host access status of Read/Write.

To more than one Host port


DataCore Virtual Disks always have their own unique Network Address Authority (NAA)
identifier that a Host can use to manage the same Virtual Disk being served to multiple Ports
on the same Host Server and the same Virtual Disk being served to multiple Hosts.

While DataCore cannot guarantee that a disk device's NAA is what a Host's operating system
uses to identify a disk device served to it over different paths, generally we have found that
it is. And while there is sometimes a convention that all paths to the same disk device should
always use the same LUN 'number' to guarantee consistent device identification, this
may not be technically required. Always refer to the Host operating system vendor's own
documentation for advice on this.

DataCore's Software does, however, always try to create mappings between the Host's ports
and the DataCore Server's Front-end (FE) ports for a Virtual Disk using the same LUN number
where it can. The software will first find the next available (lowest) LUN 'number' for the Host-
DataCore FE mapping combination being applied and will then try to apply that same LUN
number to all other mappings that are being attempted when the Virtual Disk is being
served. If any Host-DataCore FE port combination being requested at that moment is already
using that LUN number (e.g. if the Host already has other Virtual Disks served to it), then the
software will find the next available LUN number and apply that to those specific
Host-DataCore FE mappings only.

VMware ESXi Path Selection Policies (PSP)


For Round Robin (RR) see page 13
For Fixed see page 15
For Most Recently Used (MRU) see page 17
Also See
Video: Configuring ESX Hosts in the DataCore Management Console
https://datacore.custhelp.com/app/answers/detail/a_id/1637
Registering Hosts
https://docs.datacore.com/SSV-WebHelp/Hosts.htm
Changing Virtual Disk Settings - SCSI Standard Inquiry
https://docs.datacore.com/SSV-WebHelp/Changing_Virtual_Disk_Settings.htm

The VMware ESXi Host's settings
iSCSI connections
TCP Port
Open TCP Port 3260 for iSCSI connections to a DataCore Server.
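Where the nc utility is available in the ESXi Shell, a quick way to confirm that the port is
reachable from the Host is to probe a DataCore Server front-end iSCSI port directly (the IP
address below is an example only):

nc -z 192.168.1.101 3260

If the command cannot connect, check the firewall and routing between the ESXi Host and the
DataCore Server before investigating further.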

Advanced settings
DiskMaxIOSize
By default, an ESXi Host will send I/O requests up to a maximum of 32MB in size. SANsymphony
will split ‘large’ I/O requests into much smaller request sizes, and for sequential I/O patterns this
will not usually have any noticeable effect on overall latency on the ESXi Host. For other,
more random I/O patterns, however, the Host may have to wait longer for each of its large
I/O requests to complete, because all of the now smaller requests must each be completed
on the DataCore Server(s) before SANsymphony can allow the next ‘large’ I/O request to be
sent from the Host; this can significantly increase overall latency between the Host and
the DataCore Server.

Therefore, DataCore strongly recommend that the DiskMaxIOSize setting be reduced so that,
in the case of non-sequential I/O, there is no significant additional wait time for I/O requests
to complete. Using vSphere, select the ESX Host and click on the ‘Configure’ tab. From the
System options, choose ‘Advanced System Settings’:

Change the ‘Disk.DiskMaxIOSize’ setting to 512:
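The same setting can also be changed and verified from the ESXi Host's console; a minimal
sketch using the standard advanced-settings path for this value:

esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 512
esxcli system settings advanced list -o /Disk/DiskMaxIOSize

The second command displays the current and default values so the change can be confirmed.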

Also see
Tuning ESX/ESXi for better storage performance by modifying the maximum I/O block
size
https://kb.vmware.com/s/article/1003469

Large I/O block size operations show high latency


https://kb.vmware.com/s/article/2036863


Enable VM Component Protection (VMCP)


VMware ESXi 6.x only.
Enabling VMCP on your ESXi HA cluster allows the cluster to react to “all paths down” (APD) and
“permanent device loss” (PDL) conditions by restarting VMs, which helps to speed up the overall
VMware recovery process should there be an APD/PDL storage event.

For guidelines on setting up VM Component Protection see
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.avail.doc/GUID-F01F7EB8-FF9D-45E2-A093-5F56A788D027.html

VMware Path Selection Policies
Overview
Which PSPs are qualified by DataCore Software?
Please refer to the compatibility list on page 4.

Which PSP does DataCore Software recommend?


DataCore does not recommend a particular PSP. Choose the PSP that is appropriate
for your configuration.

Which Storage Array Type Plug-in (SATP) should I use?


For Round Robin (RR) see page 13
For Fixed see page 15
For Most Recently Used (MRU) see page 17

Changing the PSP type on an already-served Virtual Disk


If the PSP can use the same SATP as the original PSP setting, then nothing more needs to be
done on the DataCore Server.

If the PSP requires a different SATP than the original PSP setting, then DataCore recommend
that the Virtual Disks are first unserved from the Host, then the old SATP rule is removed, the new
SATP rule is applied, and finally the Virtual Disks are re-served and re-discovered by the Host.
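For example, if the custom Round Robin rule shown later in this guide had been in use, it could
be removed with a command of the following form before the new rule is added:

esxcli storage nmp satp rule remove -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA -c tpgs_on -P VMW_PSP_RR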

Using different PSPs for the same Virtual Disk across multiple Hosts
While technically possible, this is not supported by DataCore as we cannot guarantee the
behavior of the any/all of the VMware ESXi Hosts sharing this Virtual Disk.

Always verify that DataCore Virtual Disks have been set correctly
Either use the vSphere client UI or run the following command in the ESXi console:

esxcli storage nmp device list | grep -A 7 ^naa\.60030d9

This command lists only the disk devices that use DataCore's unique NAA identifier (part of a
SANsymphony Virtual Disk's SCSI Standard Inquiry Data), together with the next seven lines of
output for each device, which should include the PSP and the SATP in use.

Also see

Changing a LUN to use a different Path Selection Policy (PSP)


https://kb.vmware.com/s/article/1036189

Changing multipath or ALUA support settings for hosts


https://docs.datacore.com/SSV-WebHelp/Multipath_Support.htm

Moving from Most Recently Used to another PSP on page 32


This contains a step-by-step guide on how to migrate from MRU to one of the other PSPs.


Round Robin PSP

Configuring the Host for Round Robin


DataCore recommend using a custom SATP rule.

Creating a custom SATP rule


Run the following command on the ESXi Host’s console:

esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA -c tpgs_on -P VMW_PSP_RR -O iops=10

DataCore recommends changing the IOPS value to 10 (from the default of 1000) as this has
been found in testing to improve performance.

Notes

 SATP rules are persistent and will get claimed during the boot process for all existing
Virtual Disks and any new Virtual Disks served later.

 It is possible to apply these additional settings on-the-fly to a running ESXi Host (i.e.
without any IO disruption) by using esxcli. However, this is not the same as
configuring an SATP rule and, if done this way, the changes have no persistence over the
next reboot. On the next reboot, the rule will be claimed as it was originally defined and
these on-the-fly changes will no longer be in place (and will need to be remade manually).

Also see https://kb.vmware.com/s/article/2069356
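As a sketch of the kind of on-the-fly (non-persistent) change described above, the PSP and the
Round Robin IOPS value can be set for a single, already-claimed device; the NAA identifier
below is a placeholder:

esxcli storage nmp device set --device=naa.60030d90xxxxxxxxxxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.60030d90xxxxxxxxxxxxxxxxxxxxxxxx --type=iops --iops=10

Remember that, unlike the SATP rule, these per-device settings will not persist over the next reboot.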

Verify the SATP rule


Run the following command on the ESXi Host’s console:

esxcli storage nmp satp rule list -s VMW_SATP_ALUA | grep DataCore

The response should look something like this:

VMW_SATP_ALUA DataCore Virtual Disk user tpgs_on VMW_PSP_RR iops=10

The example above is taken from VMware ESXi version 6.5

Using a default SATP rule instead of a custom one


The following default SATP rule can be used if preferred:

VMW_SATP_ALUA system tpgs_on Any array with ALUA support


Configuring the DataCore Server for Round Robin


With or without ALUA?

For Round Robin, Hosts must have the ALUA setting enabled in the SANsymphony console.

Which SANsymphony ‘Preferred Server’ setting should I use?

For Round Robin, DataCore recommends configuring Hosts with an explicit Server or using
the 'Auto select' setting.

Using ‘Auto Select’ or a named DataCore Server

When the Host's Preferred Server setting is ‘Auto Select’, the Host’s paths to the
DataCore Server listed first in the Virtual Disk’s details tab will be set as ‘Active Optimized’. In
the case of a named DataCore Server, that DataCore Server’s paths will be set as the
‘Active Optimized’ paths, regardless of its order in the Virtual Disk’s details tab.

All paths from the Host to the other DataCore Server(s) will be set as 'Active Non-optimized'.

Using the ‘All’ setting

This will set all paths on all DataCore Servers to be ‘Active Optimized’ and, while this may
seem ideal at first, it may end up causing worse performance than expected.

When there are significant distances between DataCore Servers and Hosts (e.g. across links
between remote data centers), then sending I/O Round Robin to ‘remote’ DataCore Servers
compared to a Host’s location may cause noticeable delays/latency while the I/O travels over
the remote links and back compared to just sending the I/O to DataCore Servers that are
‘local’ to the Host.

Therefore, testing is advised before using the ‘All’ preferred setting in production to make
sure that the I/O speeds between servers are adequate.

Also see

Changing multipath or ALUA support settings for hosts


https://docs.datacore.com/SSV-WebHelp/Multipath_Support.htm

Preferred Servers and Preferred Paths


https://docs.datacore.com/SSV-WebHelp/port_connections_and_paths.htm

Appendix A – Notes on Preferred Server and Preferred Path settings on page 27


This contains a more detailed explanation when using the ‘All’ setting.


Fixed PSP

Configuring the Host for Fixed


DataCore recommend using a custom SATP rule.

Creating a custom SATP rule


Run the following command on the ESXi Host’s console:

esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA -c tpgs_on -P VMW_PSP_FIXED

Verify the SATP rule


Run the following command on the ESXi Host’s console:

esxcli storage nmp satp rule list -s VMW_SATP_ALUA | grep DataCore

The response should look something like this:

VMW_SATP_ALUA DataCore Virtual Disk user tpgs_on VMW_PSP_FIXED

The example above is taken from VMware ESXi version 6.5

Using a default SATP rule instead of a custom one


The following default SATP rule can be used if preferred:

VMW_SATP_ALUA system tpgs_on Any array with ALUA support


Configuring the DataCore Server for Fixed


With or without ALUA?

For Fixed, Hosts must have the ALUA setting enabled in the SANsymphony console.

Which SANsymphony ‘Preferred Server’ setting should I use?

For Fixed, Hosts must use the ‘All’ setting as this will set all paths on all DataCore Servers to
be ‘Active Optimized’, which Fixed always expects when failing over to (or back from) a DataCore
Server. Using a different ‘Preferred Server’ setting could leave one or more of the
DataCore Servers' paths as ‘Active Non-Optimized’, which may cause failover/failback to not
work as expected.

Unlike Round Robin, however, Hosts will not send IO to all paths of the preferred DataCore
Server when using Fixed, and the 'active/preferred' paths are not controlled by the
DataCore Server’s ‘Preferred Server’ setting but by the Fixed PSP on the ESXi Host.

Please refer to VMware's own documentation on how to configure 'active' paths when using
Fixed PSP.
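As an illustration only (the device and path names below are placeholders), a preferred path for
a device claimed by the Fixed PSP can be set and checked from the ESXi Host's console:

esxcli storage nmp psp fixed deviceconfig set --device=naa.60030d90xxxxxxxxxxxxxxxxxxxxxxxx --path=vmhba2:C0:T1:L0
esxcli storage nmp psp fixed deviceconfig get --device=naa.60030d90xxxxxxxxxxxxxxxxxxxxxxxx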

Also see

Changing multipath or ALUA support settings for hosts


https://docs.datacore.com/SSV-WebHelp/Multipath_Support.htm

Preferred Servers and Preferred Paths


https://docs.datacore.com/SSV-WebHelp/port_connections_and_paths.htm

Appendix A – Notes on Preferred Server and Preferred Path settings on page 27


This contains a more detailed explanation when using the ‘All’ setting.

Moving from Most Recently Used to another PSP on page 32


This contains a step-by-step guide on how to migrate from MRU to one of the other PSPs


Most Recently Used PSP

Configuring the Host for Most Recently Used


DataCore recommend using a custom SATP rule.

Creating a custom SATP rule


Run the following command on the ESXi Host’s console:

esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_DEFAULT_AA -P VMW_PSP_MRU

Verify the SATP rule


Run the following command on the ESXi Host’s console:

esxcli storage nmp satp rule list -s VMW_SATP_DEFAULT_AA | grep DataCore

The response should look something like this:

VMW_SATP_DEFAULT_AA DataCore Virtual Disk user VMW_PSP_MRU

The example above is taken from VMware ESXi version 5.5

Using a default SATP rule instead of a custom one


The following default SATP rule can be used if preferred:

VMW_SATP_DEFAULT_AA fc system Fibre Channel Devices


Configuring the DataCore Server for Most Recently Used


With or without ALUA?

For MRU, Hosts must not have the ALUA setting enabled in the SANsymphony console.

Which SANsymphony ‘Preferred Server’ setting should I use?

As ALUA is not enabled, the ‘Preferred Server’ setting is ignored. The 'active' paths are not
controlled by the DataCore Server’s ‘Preferred Server’ setting but by the MRU PSP on the
ESXi Host.

Please refer to VMware's own documentation on how paths are selected when using the
MRU PSP.

Also see

Changing multipath or ALUA support settings for hosts


https://docs.datacore.com/SSV-WebHelp/Multipath_Support.htm

Moving from Most Recently Used to another PSP on page 32


This contains a step-by-step guide on how to migrate from MRU to one of the other PSPs.


Known issues
The following is intended to make DataCore Software users aware of any issues that affect
performance, access or may give unexpected results under particular conditions when
SANsymphony is used in configurations with VMware ESXi Hosts.

Some of these Known Issues were found during DataCore's own testing; others were reported
by our users, where the solution that was found was not related to DataCore's own products.

DataCore cannot be held responsible for incorrect information regarding another vendor’s
products and no assumptions should be made that DataCore has any communication with
these other vendors regarding the issues listed here.

We always recommend that the vendor is contacted directly for more information
on anything listed in this section.

For ‘Known issues’ that apply to DataCore Software’s own products, please refer to the
relevant DataCore Software Component’s release notes.


Failover
Affects ESX 6.x and 5.5
Sharing the same ‘inter-site’ connection for both Front end and Mirror ports may result
in loss of access to Virtual Disks for ESXi Hosts if a failure occurs on that shared
connection.

Sharing the same physical connection for both FE and MR ports will work as expected as
long as everything is healthy. Any kind of failure event over this ‘single’ link may cause both
mirror and front-end I/O to fail at the same time, and this will result in one or more Virtual
Disks being unexpectedly inaccessible to one or more ESXi Hosts even though there is an
available I/O path to one of the DataCore Servers.

Even though the DataCore Server will issue the correct SCSI notification back to the ESXi
Hosts (i.e. ‘LUN_NOT_AVAILABLE’) to tell them that the path to the Virtual Disk is no longer
available, the ESXi Host will ignore this SCSI response and continue to try to access the
Virtual Disks on a path that VMware will then report as either 'Permanent Device Loss' (PDL) or
'All Paths Down' (APD). ESXi will not attempt any 'failover' (HA) or ‘move’ of the VM (Fault
Tolerance) and will lose access to the Virtual Disk.

Because of this ESXi failover limitation, DataCore cannot guarantee failover for a
configuration where ESX Hosts are serving Virtual Disks over physical link(s) where, at the
same time, the DataCore Servers are using these same physical link(s) for Mirror I/O
(between DataCore Servers).

Therefore, DataCore recommend that at least two physically separate links are used: one
for Mirror I/O and the other for FE I/O.

Affects ESX 6.7 only


Failover/Failback takes significantly longer than expected.
Users have reported to DataCore that, before applying ESXi 6.7 Patch Release
ESXi-6.7.0-20180804001 (or later), failover could take in excess of 5 minutes. DataCore
recommend (as always) applying the most up-to-date patches to your ESXi operating
system.

See: https://kb.vmware.com/s/article/56535

Affects ESX 6.x only


Storage PDL responses may not trigger path failover in vSphere 6.0.0 and 6.0 Update 1.
A fix is available from VMware.

See https://kb.vmware.com/s/article/2144657.


iSCSI connections
Affects ESX 6.x and 5.5
ESXi Hosts experience degraded IO performance on iSCSI networks when Delayed ACK is
enabled on the ESXi Host's software iSCSI initiator.
For more specific information and how to disable the 'Delayed ACK' feature on ESXi Hosts:

See https://kb.vmware.com/s/article/1002598

A reboot of the ESXi Host will be required.

Affects ESX 6.x and 5.5


Applies to SANsymphony 10.0 PSP 6 Update 5 and earlier
ESX Hosts using multiple IP addresses that share the same IQN to connect to the same DataCore Server
Front-end port are not supported (this also includes ESXi 'Port Binding').

The Front End port will only accept the ‘first’ login and a unique iSCSI Session ID (ISID) will be created.
All subsequent connections coming from a different interface (but that shares the same IQN as the
first login) will create an ISID conflict and be rejected by the SANsymphony software. No further iSCSI
logins will be allowed. Also note that if a SCSI event causes a logout of the iSCSI session then another
interface (sharing the same IQN) may be able to login and prevent a previously connected iSCSI
interface from being able to re-connect.

See the following examples of supported and not-supported configuration when using SANsymphony
10.0 PSP 6 Update 5 or earlier:

Example 1 – A supported configuration


An ESX Host has 4 Interfaces each with its own IP address and each with the same IQN:

192.168.1.1 (iqn.esx1)
192.168.2.1 (iqn.esx1)
192.168.1.2 (iqn.esx1)
192.168.2.2 (iqn.esx1)

Two DataCore Servers each have 2 FE ports with their own IP address and own IQN:

192.168.1.101 (iqn.dcs1-1)
192.168.2.101 (iqn.dcs1-2)
192.168.1.102 (iqn.dcs2-1)
192.168.2.102 (iqn.dcs2-2)

Each Interface of the ESX Host connects to a separate FE Port on each DataCore Server:


(iqn.esx1) 192.168.1.1 ← ISCSI Fabric 1 → 192.168.1.101 (iqn.dcs1-1)
(iqn.esx1) 192.168.2.1 ← ISCSI Fabric 2 → 192.168.2.101 (iqn.dcs1-2)
(iqn.esx1) 192.168.1.2 ← ISCSI Fabric 1 → 192.168.1.102 (iqn.dcs2-1)
(iqn.esx1) 192.168.2.2 ← ISCSI Fabric 2 → 192.168.2.102 (iqn.dcs2-2)


This type of configuration is very easy to manage, especially if there are any connection problems.

Example 2 – An unsupported configuration

Using the same IP addresses as in the example above:

(iqn.esx1) 192.168.1.1 ← ISCSI Fabric 1 → 192.168.1.101 (iqn.dcs1-1)


(iqn.esx1) 192.168.2.1 ← ISCSI Fabric 2 → 192.168.2.101 (iqn.dcs1-2)
(iqn.esx1) 192.168.1.2 ← ISCSI Fabric 1 → 192.168.1.101 (iqn.dcs1-1)
(iqn.esx1) 192.168.2.2 ← ISCSI Fabric 2 → 192.168.2.102 (iqn.dcs2-2)

Two interfaces from ESX1 are connected to the same FE port on the DataCore Server, which creates the ISID conflict described above.

QLogic network adaptors


Affects ESX 6.x and 5.5
When using QLogic's dual-port, 10Gbps Ethernet-to-PCIe Converged Network Adaptor,
disable both the adaptor's BIOS and the 'Select a LUN to Boot from' option.

Server Hardware
Affects ESX 6.5
HPE ProLiant Gen10 Servers Running VMware ESXi 6.5 (or Later) and Configured with a
Gen10 Smart Array Controller may lose connectivity to Storage Devices.

Search https://support.hpe.com/hpesc/public/home using keyword a00041660en_us

Affects ESX 6.x and 5.5


Virtual HBAs (vHBA) and other PCI devices may stop responding when using Interrupt
Remapping

See https://kb.vmware.com/s/article/1030265.

Serving and un-serving Virtual Disks


Affects ESX 6.x and 5.5
ESXi Hosts need to perform a rescan whenever Virtual Disks are unserved
Without a rescan on the Host, ESXi will continue to send SCSI commands to DataCore
Server Frontend Ports for LUNs that are no longer served. This causes the DataCore Server
to have to send back an appropriate ‘ILLEGAL_REQUEST’ SCSI response each time the
missing LUN is probed for by the Host. In extreme cases, when large numbers of Virtual
Disks are unserved, the number of SCSI commands generated by this send-and-response
will significantly affect performance for any Host that is using the Front End Port(s) for
existing Virtual Disks.

See https://kb.vmware.com/s/article/2004605 and https://kb.vmware.com/s/article/1003988.
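A rescan can be triggered from the vSphere client or, as a minimal example, from the ESXi
Host's console:

esxcli storage core adapter rescan --all

This rescans every storage adapter on the Host; a single adapter can be rescanned by using
--adapter=vmhbaN instead of --all.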


Affects ESX 6.5


Virtual machine hangs and issues an ESXi host-based VSCSI reset
For virtual machines with disks larger than 256GB on VMFS5, or larger than 2.5TB on VMFS6, the
following symptoms may be seen:
 Host becomes unresponsive and cannot restart the specific hung virtual machines.
 Host requires a reboot to restart the virtual machines.
See https://kb.vmware.com/s/article/2152008.

VAAI
Affects ESX 6.x and 5.5
Thin Provisioning Block Space Reclamation (VAAI UNMAP) does not work if the volume
is not native VMFS-5 (i.e. it is converted from VMFS-3) or the partition table of the LUN
was created manually

See: https://kb.vmware.com/s/article/2048466

Affects ESX 6.x and 5.5


Not using VMware’s VAAI's ‘Atomic Test and Set / Compare and Write’ on ESX Hosts
may result in excessive ‘SCSI reservation requests’ between ESX Hosts which can lead
to performance degradation
When 'significant' numbers of VMs are running on a Virtual Disk this can lead to excessive
reservation conflicts between Hosts sharing the Virtual Disk, and this can cause increased
I/O latency. Enable VMware's VAAI ATS feature, or reduce the number of running Virtual
Machines on any Virtual Disk displaying this behavior, and ensure that ESX Hosts with the
'closest' IO path to the DataCore Server all access the same, shared Virtual Disk, as this will
help to reduce the potential for excessive SCSI Reservation conflicts.

See: https://kb.vmware.com/s/article/1005009

Affects ESX 6.x and 5.5


Under 'heavy' load the VMFS heartbeat may fail with a 'false' ATS miscompare message.
The ESXi VMFS 'heartbeat' used to use normal SCSI reads and writes to perform its
function. A change in the heartbeat method (released in ESXi 5.5 Update 2 and ESXi 6.0)
uses ESXi's VAAI ATS commands sent directly to the storage array (i.e. the DataCore
Server). DataCore Servers do not require (and so do not support) these ATS commands.
DataCore therefore recommend disabling the VAAI ATS heartbeat setting.

See https://kb.vmware.com/s/article/2113956.

If ESXi Hosts are connected to other storage arrays contact VMware to see if it is safe to
disable this setting for these arrays.
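The KB article above describes how to disable the ATS heartbeat from the ESXi Host's console;
for VMFS5 datastores the command takes the following general form (check the KB for the exact
option name for your ESXi and VMFS versions):

esxcli system settings advanced set -i 0 -o /VMFS3/useATSForHBOnVMFS5

Setting the value back to 1 re-enables the ATS heartbeat.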

vMotion
Affects ESX 6.x
VMs get corrupted on vVOL datastores after vMotion
When a VM residing on a vVOL datastore is migrated using vMotion to another host by
either DRS or manually, and if the VM has one or more of the following features enabled:


 CBT
 VFRC
 IOFilter
 VM Encryption
A corruption of data/backups/replicas and/or performance degradation is experienced after
vMotion.

See: https://kb.vmware.com/s/article/55800

vSphere Client and vSphere Web Client


Affects ESX 6.x and 5.5
Cannot extend datastore through vCenter Server
If a SANsymphony Virtual Disk served to more than one ESX Host is not using the same LUN
on all Front End paths for all Hosts and then has its logical size extended, vSphere may not
be able to display the LUN in its UI to then expand the VMware datastore.

The following VMware article provides steps to work around the issue.
https://kb.vmware.com/s/article/1011754

Notes
While SANsymphony will always attempt to match the LUN number on all Hosts for a Virtual Disk, in
some cases it is not possible to do so, e.g. a vDisk is served to an ESXi host that already has
vDisks mapped and the vDisk had already been served to other ESXi hosts previously;
matching the LUN across all ESXi hosts may not be possible because this would conflict
with existing mappings for other vDisks.

Also see: Serving Virtual Disks - To more than one Host port on page 9

Affects ESX 6.x only


Active path information (I/O) missing after update to 6.0 Update 3
Paths will never report 'Active (I/O)', only 'Active'. A fix is available from VMware.
See: https://kb.vmware.com/s/article/2149992


Notes
Use the ESXi 'esxtop' command (e.g. using either ‘d’ or ‘u’ switches) to show actual activity
on the expected paths and/or devices.

VMware Tools
Affects ESX 6.5
VMware Tools Version 10.3.0 Recall and Workaround Recommendations
VMware has been made aware of issues in some vSphere ESXi 6.5 configurations with the
VMXNET3 network driver for Windows that was released with VMware Tools 10.3.0.
This release has been removed from the VMware Downloads page.

See: https://kb.vmware.com/s/article/57796

Affects ESX 6.x and 5.5


Ports are exhausted on Guest VM after a few days when using VMware Tools 10.2.0
VMware Tools 10.2.0 version is not recommended by VMware.

See: https://kb.vmware.com/s/article/54459

Microsoft Clusters in Virtual Machines


Affects ESX 6.x and 5.5
Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more
than one Front End mapping to each DataCore Server may cause unexpected loss of
access.
A fix is available from VMware.

See https://kb.vmware.com/s/article/2145663 for more information.

Affects ESX 6.x only


Unable to access filesystem for MSCS cluster nodes after vMotion.
A fix is available from VMware.

See https://kb.vmware.com/s/article/2144153.


Affects ESX 6.x and 5.5


The SCSI-3 Persistent Reserve tests fail for Windows 2012 Microsoft Clusters running in
VMware ESXi Virtual Machines.
This is expected.

See https://kb.vmware.com/s/article/1037959
Specifically read the 'additional notes' (under the section 'VMware vSphere support for
running Microsoft clustered configurations').

Affects ESX 6.x and 5.5


ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes with RDMs may
take a long time to start or during LUN rescan.

See https://kb.vmware.com/s/article/1016106.

Appendix A
Preferred Server & Preferred Path settings

Without ALUA enabled


If Hosts are registered without ALUA support, the Preferred Server and Preferred Path
settings will serve no function. All DataCore Servers and their respective Front End (FE) paths
are considered ‘equal’.

It is up to the Host's own operating system or failover software to determine which
DataCore Server is its preferred server.

With ALUA enabled


Setting the Preferred Server to ‘Auto’ (or an explicit DataCore Server), determines the
DataCore Server that is designated ‘Active Optimized’ for Host IO. The other DataCore Server
is designated ‘Active Non-Optimized’.

If for any reason the Storage Source on the preferred DataCore Server becomes unavailable,
and the Host Access for the Virtual Disk is set to Offline or Disabled, then the other DataCore
Server will be designated the ‘Active Optimized’ side. The Host will be notified by both
DataCore Servers that there has been an ALUA state change, forcing the Host to re-check the
ALUA state of both DataCore Servers and act accordingly.

If the Storage Source on the preferred DataCore Server becomes unavailable but the Host
Access for the Virtual Disk remains Read/Write (for example, if only the storage behind the
DataCore Server is unavailable but the FE and MR paths are all connected, or if the Host
becomes physically disconnected from the preferred DataCore Server, e.g. a Fibre Channel or
iSCSI cable failure), then the ALUA state will not change for the remaining ‘Active Non-
Optimized’ side. In this case the DataCore Server will not prevent access to the Host, nor will
it change the way READ or WRITE IO is handled compared to the ‘Active Optimized’ side, but
the Host will still register this DataCore Server’s paths as ‘Active Non-Optimized’, which may
(or may not) affect how the Host behaves generally.
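To see how an ESXi Host is currently reporting the ALUA state of each path to a DataCore
Virtual Disk, a command of the following form can be used on the ESXi Host's console (the NAA
identifier is a placeholder):

esxcli storage nmp path list --device=naa.60030d90xxxxxxxxxxxxxxxxxxxxxxxx

The 'Group State' field of each path shows 'active' for the Active Optimized side and 'active
unoptimized' for the Active Non-Optimized side.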

Also see

Preferred Servers and Preferred Paths from:


https://docs.datacore.com/SSV-WebHelp/Port_Connections_and_Paths.htm


In the case where the Preferred Server is set to ‘All’, then both DataCore Servers are
designated ‘Active Optimized’ for Host IO.

All IO requests from a Host will use all Paths to all DataCore Servers equally, regardless of the
distance that the IO has to travel to the DataCore Server. For this reason, the ‘All’ setting is
not normally recommended. If a Host has to send a WRITE IO to a ‘remote’ DataCore Server
(where the IO Path is significantly distant compared to the other ‘local’ DataCore Server),
then the WAIT times accrued by having to send the IO not only across the SAN to the remote
DataCore Server, but for the remote DataCore Server to mirror back to the local DataCore
Server and then for the mirror write to be acknowledged from the local DataCore Server to
the remote DataCore Server and finally for the acknowledgement to be sent to the Host back
across the SAN, can be significant.

The benefits of being able to use all Paths to all DataCore Servers for all Virtual Disks are not
always clear cut. Testing is advised.

For Preferred Path settings it is stated in the SANsymphony Help:


A preferred front-end path setting can also be set manually for a particular virtual disk. In this
case, the manual setting for a virtual disk overrides the preferred path created by the
preferred server setting for the host.

So for example, if the Preferred Server is designated as DataCore Server A and the Preferred
Paths are designated as DataCore Server B, then DataCore Server B will be the ‘Active
Optimized’ Side not DataCore Server A.

In a two-node Server group there is usually nothing to be gained by making the Preferred
Path setting different to the Preferred Server setting and it may also cause confusion when
trying to diagnose path problems, or when redesigning your DataCore SAN with regard to
Host IO Paths.

For Server Groups that have three or more DataCore Servers, where one (or more) of
these DataCore Servers shares Mirror Paths with other DataCore Servers, setting the
Preferred Path makes more sense.

So, for example, if DataCore Server A has two mirrored Virtual Disks, one with DataCore Server
B and one with DataCore Server C, and DataCore Server B also has a mirrored Virtual Disk
with DataCore Server C, then using just the Preferred Server setting to designate the ‘Active
Optimized’ side for the Host’s Virtual Disks becomes more complicated. In this case the
Preferred Path setting can be used to override the Preferred Server setting for a much more
granular level of control.

Appendix B
Reclaiming storage from Disk Pools
How much storage will be reclaimed?
This is impossible to predict. SANsymphony can only reclaim Storage Allocation Units that
have no block-level data on them. If a Host writes its data ‘all over’ its own filesystem, rather
than contiguously, the amount of storage that can be reclaimed may be significantly less
than expected.

Defragmenting data on served Virtual Disks


A VMFS volume cannot be defragmented. See: https://kb.vmware.com/s/article/1006810

Notes on SANsymphony's Reclamation feature


Automatic Reclamation
SANsymphony checks for any ‘zero’ write I/O as it is received by the Disk Pool and keeps track
of which block addresses they were sent to. When all the blocks of an allocated SAU have
received ‘zero’ write I/O, the storage used by the SAU is then reclaimed. Mirrored and
replicated Virtual Disks will mirror/replicate the ‘zero’ write I/O so that storage can be
reclaimed on the mirror/replication destination DataCore Server in the same way.

Manual Reclamation
SANsymphony checks for ‘zero’ block data by sending read I/O to the storage. When all the
blocks of an allocated SAU are detected as having ‘zero’ data on them, the storage used by
the SAU is then reclaimed.

Mirrored Virtual Disks will receive the manual reclamation ‘request’ on all DataCore Servers
involved in the mirror configuration at the same time and each DataCore Server will read
from its own storage. The Manual reclamation ‘request’ is not sent to replication destination
DataCore Servers from the source. Replication destinations will need to be manually
reclaimed separately.


Reclaiming storage on the Host using VAAI


When used in conjunction with either VMware’s vmkfstools or their own esxcli command,
the ‘Block Delete/SCSI UNMAP’ VAAI primitive will allow ESXi Hosts (and their VMs) to trigger
SANsymphony's ‘Automatic Reclamation’ function.
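As an example of triggering this from the ESXi Host's console on ESXi 5.5 or later (and subject
to the note below about upgraded VMFS volumes), the datastore label is a placeholder:

esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=200

This issues SCSI UNMAP commands for the free space on the named VMFS datastore, which
SANsymphony's Automatic Reclamation can then process.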

Important Note
Thin Provisioning Block Space Reclamation (VAAI UNMAP) does not work if the volume
is not native VMFS-5 (i.e. it is converted from VMFS-3) or the partition table of the LUN
was created manually

See: https://kb.vmware.com/s/article/2048466

Space reclamation priority setting


DataCore recommend using the 'Low' space reclamation priority setting. Any other settings
could result in excessive I/O loads being generated on the DataCore Server (with large
numbers of SCSI UNMAP commands) and this may then cause unnecessary increases in I/O
latency.

Space reclamation granularity setting


DataCore recommend using 1MB.

Reclaiming storage on the Host manually


Create a new VMDK using ‘Thick Provisioning Eager Zero’
A suggestion would be to create an appropriately sized virtual disk device (VMDK) where the
storage needs to be reclaimed and ‘zero-fill’ it by formatting as a ‘Thick Provisioning Eager
Zero’ Hard Disk.

Using vSphere
Add a new ‘Hard Disk’ to the ESXi Datastore of a size less than or equal to the free space
reported by ESXi and choose ‘Disk Provisioning: Thick Provisioned Eager Zero’. Once the
creation of the VMDK has completed (and storage has been reclaimed from the Disk
Pool), this VMDK can be deleted.

Using the command line


An example:

vmkfstools -c [size] -d eagerzeroedthick /vmfs/volumes/[mydummydir]/[mydummy.vmdk]

Where ‘[size]’ is less than or equal to the free space reported by ESXi.

Once the creation of the VMDK has completed (and storage has been reclaimed from the
Disk Pool), this VMDK can be deleted.

For Raw Device Mapped Virtual Disks


Virtual Machines that access SANsymphony Virtual Disks as RDM devices may be able to
generate ‘all-zero’ write I/O patterns using the VM’s operating systems own tools. Examples
include ‘sdelete’ for Microsoft Windows VMs or ‘dd’ for UNIX/Linux VMs.
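A minimal sketch for a Linux VM, assuming the RDM device is mounted at /mnt/rdm (the mount
point and file name are hypothetical):

dd if=/dev/zero of=/mnt/rdm/zerofill bs=1M
rm /mnt/rdm/zerofill

The dd command stops once the filesystem's free space has been filled with zeros; the
zero-filled file can then be deleted.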


Also see

ESXi 5.5 Hosts: Reclaiming Unused Storage Space


https://pubs.vmware.com/vsphere-55/topic/com.vmware.vcli.examples.doc/cli_manage_files.5.6.html

ESXi 6.0 Hosts: Reclaiming Unused Storage Space


https://pubs.vmware.com/vsphere-60/topic/com.vmware.vcli.examples.doc/cli_manage_files.5.6.html

ESXi 6.5 Hosts: Space Reclamation Requests from Guest Operating Systems
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-5E1396BE-6EA8-4A6B-A458-FC9718E2C55B.html

Auto-reclamation of unused SAUs


https://docs.datacore.com/SSV-WebHelp/about_disk_pools.htm

Reclaiming Unused Virtual Disk Space in Disk Pools:


https://docs.datacore.com/SSV-WebHelp/reclaiming_virtual_disk_space.htm

Appendix C
Moving from Most Recently Used to another PSP
1. On the DataCore Server: Unserve all Virtual Disks from the Host.
2. On the Host: Rescan all paths/disk devices so that the Virtual Disks are cleanly
removed.
3. On the Host: Remove the SATP Rule.
4. On the DataCore Server: Check the ALUA box for the Host.
5. On the DataCore Server: Re-serve all Virtual Disks back to the Host.
6. On the Host: Rescan all paths/disk devices so that the Virtual Disks are cleanly
discovered.
7. On the Host: Configure the host for either Fixed or Round Robin Path Selection Policy
as appropriate:
For Round Robin (RR) see page 13
For Fixed see page 15

Also see

Changing a LUN to use a different Path Selection Policy (PSP)


https://kb.vmware.com/s/article/1036189

Changing multipath or ALUA support settings for hosts


https://docs.datacore.com/SSV-WebHelp/Multipath_Support.htm

Previous Changes
July 2019
Added
Known Issues - VAAI
Affects ESX 6.x and 5.x
Thin Provisioning Block Space Reclamation (VAAI UNMAP) does not work if the volume is not native VMFS-5
(i.e. it is converted from VMFS-3) or the partition table of the LUN was created manually
See: https://kb.vmware.com/s/article/2048466

Updated
The DataCore Server’s settings – Port Roles

Removed
General
All information regarding SANsymphony-V 9.x as this version is end of life (EOL).
Please see: End of life notifications for DataCore Software products
https://datacore.custhelp.com/app/answers/detail/a_id/1329
For more information.

June 2019
Updated
Appendix B - Reclaiming Storage from Disk Pools
Defragmenting data on Virtual Disks

For ESXi a VMFS volume cannot be defragmented.


Please see VMware’s own Knowledgebase article: Does fragmentation affect VMFS datastores? -
https://kb.vmware.com/s/article/1006810

March 2019
Updated
The VMware ESXi Host's settings
Advanced settings – DiskMaxIOSize
An explanation has been added on why DataCore recommend that the default value for an ESXi host is changed.

Appendix B - Reclaiming Storage from Disk Pools


Reclaiming storage on the Host manually
This section now has vSphere-specific references for the manual method of creating a new VMDK using ‘Thick
Provisioning Eager Zero’.

December 2018
Updated
VMware Path Selection Policies – Round Robin PSP

Creating a custom SATP rule


A minor update to the explanation when changing the RR IOPs value. It is now clearer when a change to the rule
would or would not be expected to be persistent over reboot of the ESXi Host – see the ‘Notes’ section under the
example.

November 2018
Updated
VMware ESXi compatibility lists - VMware Site Recovery Manager (SRM)

ESXi SANsymphony 9.0 PSP 4 Update 4 (1) SANsymphony 10.0 (all versions)


6.5 Not Supported Tested/Works

6.7 Not Supported Not Qualified

ESX 6.5 is now supported using DataCore’s SANsymphony Storage Replication Adaptor 2.0. ESX 6.7 is currently not
qualified.

Please see DataCore’s SANsymphony Storage Replication Adaptor 2.0 release notes from
https://datacore.custhelp.com/app/downloads

October 2018
Added
Known Issues - Failover
Affects ESX 6.7 only
Failover/Failback takes significantly longer than expected.
Users have reported to DataCore that before applying ESXi 6.7, Patch Release ESXi-6.7.0-20180804001 (or later)
failover could take in excess of 5 minutes. DataCore are recommending (as always) to apply the most up-to-date
patches to your ESXi operating system. Also see: https://kb.vmware.com/s/article/56535

VMware ESXi compatibility lists - ESXi operating system versions


Added to the ‘Notes’ section:
iSER (iSCSI Extensions for RDMA) is not supported.

Updated
VMware ESXi compatibility lists
VMware VVOL VASA API 2.0

ESXi 9.0 PSP 4 Update 4 10.0 PSP 3 and earlier 10.0 PSP 4 and later

5.x Not VVOL/VASA compatible

Previously ESX 5.x was incorrectly listed as VVOL/VASA compatible with 10.0 PSP 4 and later.

September 2018

Added
Known Issues
VSphere Client and VSphere Web Client
Affects ESX 6.x and 5.x
Cannot extend datastore through vCenter Server
If a SANsymphony Virtual Disk served to more than one ESX Host is not using the same LUN on all Front End
paths for all Hosts and then has its logical size extended, vSphere may not be able to display the LUN in its UI to
then expand the VMware datastore. This article provides steps to work around the issue.
See: https://kb.vmware.com/s/article/1011754

VMware Tools
Affects ESX 6.5
VMware Tools Version 10.3.0 Recall and Workaround Recommendations
VMware has been made aware of issues in some vSphere ESXi 6.5 configurations with the VMXNET3 network
driver for Windows that was released with VMware Tools 10.3.0.
As a result, VMware has recalled the VMware Tools 10.3.0 release. This release has been removed from the VMware
Downloads page - see: https://kb.vmware.com/s/article/57796


Affects all guest VMs


Ports are exhausted on Guest VM after a few days when using VMware Tools 10.2.0
VMware Tools 10.2.0 version is not recommended by VMware - see: https://kb.vmware.com/s/article/54459

VMware Path Selection Policies


Round Robin PSP
Added an important note after the custom SATP rule example:
It is possible to adjust any SATP rule using esxcli commands on a running system without any other configuration
changes. However, done this way, the settings are not persistent and the next reboot will revert the rule back to
what it was before the esxcli command was made. Also see https://kb.vmware.com/s/article/2069356

August 2018

Added
Known Issues
Affects ESX 6.5
HPE ProLiant Gen10 Servers Running VMware ESXi 6.5 (or Later) and Configured with a Gen10 Smart Array
Controller may lose connectivity to Storage Devices.
Search https://support.hpe.com/hpesc/public/home using keyword a00041660en_us

VMware ESXi compatibility lists


VMware Site Recovery Manager (SRM)

ESXi SANsymphony 9.0 PSP 4 Update 4 (2) SANsymphony 10.0 (all versions)

5.x Tested/Works Tested/Works

6.0 Tested/Works Tested/Works

6.5 or later Not Supported Not Qualified

Updated
Appendix B - Reclaiming storage from Disk Pools
Reclaiming storage on the Host using VAAI - Space reclamation granularity setting
DataCore now recommend using a setting of 1MB.

Previously the recommendation was 4MB (to reflect the smallest possible Disk Pool SAU size) however VMware
disable UNMAP commands to the storage if this setting is greater than 1MB. See VMware’s own documentation
‘Thin Provisioning and Space Reclamation/ Storage Space Reclamation/ Space Reclamation Requests from VMFS
Datastores’ for more information.

July 2018

Added
VMware ESXi compatibility lists - VSphere 6.5 – Storage IO Control

ESXi SANsymphony 9.0 PSP 4 Update 4 SANsymphony 10.0 (all versions)


6.5 and later Not Supported Tested/Works

Known Issues - VMotion


Affects ESX 6.x: VMs get corrupted on vVOL datastores after vMotion
Also see: https://kb.vmware.com/s/article/55800

2
Earlier versions are ‘End of Life’. See: https://datacore.custhelp.com/app/answers/detail/a_id/1329


May 2018
Added
The VMware ESXi Host's settings – Advanced Settings
Enable VM Component Protection (VMCP)
VMware ESXi 6.x only. Enable VM Component Protection (VMCP) on your HA cluster to allow the cluster to react to
“all paths down” and “permanent device loss” conditions by restarting the VMs.

Known Issue – affects all versions of ESX

ESXi hosts need to perform a rescan whenever Virtual Disks are unserved
See https://kb.vmware.com/s/article/2004605 and https://kb.vmware.com/s/article/1003988 . Without a rescan on
the Host, ESXi will continue to send SCSI commands to DataCore Server Frontend Ports for LUNs that are no
longer served. This causes the DataCore Server to have to send back an appropriate ‘ILLEGAL_REQUEST’ SCSI
response each time the missing LUN is probed for by the Host. In extreme cases, when large numbers of Virtual
Disks are unserved, the number of SCSI commands generated by this send-and-respond cycle will significantly
affect the performance of any Host that is using the Front End Port(s) for existing Virtual Disks.
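
As a hedged sketch, a manual rescan can be triggered from the ESXi command line; the adapter name 'vmhba33'
below is only an example:

# Rescan all storage adapters so that LUNs which are no longer served are removed from the Host
esxcli storage core adapter rescan --all

# Or rescan a single adapter only ('vmhba33' is an example adapter name)
esxcli storage core adapter rescan --adapter vmhba33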

Updated
VMware Path Selection Policies - Configuring the Round Robin Path Selection Policy
While it is still possible to use VMware ESXi’s built-in, generic 'VMW_SATP_ALUA' rule, e.g.:

VMW_SATP_ALUA system tpgs_on Any array with ALUA support

DataCore are now recommending that this custom SATP rule be used instead:

esxcli storage nmp satp rule add -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA -c tpgs_on -P
VMW_PSP_RR -O iops=10

Important notes for users that were already using the previously documented custom SATP rule:
The difference between the old and new custom rule is the addition of the -O iops=10
Remove the existing custom rule before trying to add the new one, e.g.:

esxcli storage nmp satp rule remove -V DataCore -M "Virtual Disk" -s VMW_SATP_ALUA -c tpgs_on -P
VMW_PSP_RR

While it is possible to adjust the existing rule using the command line - see
https://kb.vmware.com/s/article/2069356 - this is not persistent over a reboot; therefore DataCore do not
recommend using the command from the VMware KB article.
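
As an illustrative check only (assuming the custom rule above has been added), its presence can be verified from
the ESXi command line:

# List the SATP rules for VMW_SATP_ALUA and filter for the custom DataCore entry
esxcli storage nmp satp rule list -s VMW_SATP_ALUA | grep -i datacore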

February 2018
Added
The VMware ESXi Host's settings – Advanced Settings

Adjust the Round Robin IOPS limit


Adjusting the limit down can have a positive impact on performance when using the Round Robin Path
Selection Policy. DataCore recommends changing the IOPS limit down to a value of 10 from the default value of 1000.
This allows I/O to DataCore Virtual Disk paths to be switched at a more frequent rate.
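
For Virtual Disks that have already been claimed by the Round Robin PSP, the limit can also be checked and
changed on a per-device basis. The following is a hedged example only; the 'naa.xxxxxxxxxxxxxxxx' identifier is a
placeholder for a real DataCore Virtual Disk device:

# Show the current Round Robin settings for one device (substitute the real naa identifier)
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxxxxxxxxxxxxxx

# Lower the path-switching limit for that device from the default of 1000 IOPS to 10 IOPS
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=10

Note that this per-device command only affects devices already claimed by the Round Robin PSP; the custom SATP
rule with '-O iops=10' described in this document is intended to apply the value to newly claimed Virtual Disks.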

Known Issues – ESXi 6.x and 5.x


Applies to SANsymphony versions 10.0 PSP 6 Update 5 and earlier
Connecting ESXi Host IP addresses that share the same IQN to the same DataCore Server Front-end (FE) port
is not supported (this also includes ESXi 'Port Binding'). Note: This 'known issue' was previously documented
under 'The VMware ESXi Host's settings - ISCSI Connections' section.

Updated
General
This document has been reviewed for SANsymphony 10.0 PSP 7.


Removed
General – ESXi 4.x and earlier
All references to ESXi version 4.x have now been removed as this product has reached the end of technical guidance
from VMware. Note: SANsymphony-V 9.0 PSP 4 Update 4 is still considered qualified with ESXi 4.1.x, and earlier
versions of ESXi are all considered not supported. However, if there is still a specific requirement to use ESXi 4.1
with SANsymphony-V 9.0 PSP 4 Update 4, then contact DataCore Technical Support who will be able to give
advice on any relevant information that has now been removed from this document.

Appendix B – Configuring Disk Pools


The information here has been removed as it is now superseded by the information in:
The DataCore Server- Best Practice Guidelines https://datacore.custhelp.com/app/answers/detail/a_id/1348
What was previously 'Appendix C' has now been moved to 'Appendix B' and so on.

October 2017
Updated
When connecting ESXi Hosts to DataCore Servers
ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server Front-end port is
not supported in SANsymphony versions 10.0 PSP 6 Update 5 and earlier (this also includes ESXi 'Port Binding').
Please see the ISCSI Connections section for more specific information, with examples.

VMware 'Fault Tolerant' or 'High Available' Clusters


The information has been moved to the 'Known Issues' section instead.
Sharing the same physical connection for Host Front end and DataCore Mirror Ports may result in unexpected
behavior when a failure occurs on that physical connection…

August 2017
Added
Known Issues – applies to all versions of ESXi
ESXi hosts experience degraded IO performance on the iSCSI network when Delayed ACK is 'enabled' on the ESXi
Host's software iSCSI initiator.
See https://kb.vmware.com/s/article/1002598 for more specific information and how to disable the 'Delayed ACK'
feature on ESXi Hosts. A reboot of the ESXi Host will be required.

Updated
Appendix C - Reclaiming storage
Added updated information specific to ESX 6.5 with regard to VMware's 'Space Reclamation Requests from Guest
Operating Systems' feature with VMFS6.

June 2017
Updated
Compatibility List – VMware ESXi Path Selection Policies
There was an inconsistency between what was reported in the table and the compatibility notes. Previously the
table had stated that the MRU PSP was 'Qualified' with ESX versions 4.x and 5.x. However, the compatibility
notes stated that MRU (while not on VMware's Hardware Compatibility List) was actually considered 'Not Qualified'
by DataCore Software, rather than 'Not Supported'. The table has now been corrected to reflect the compatibility
notes.

Known Issues – VMware ESXi Path Selection Policies


After upgrading to VMware ESXi 6.0 Update 3, ESX paths will only report as 'Active'. No paths will report as 'Active
(I/O)', regardless of the Path Selection Policy. A VMware KB article has now been published for this issue:
https://kb.vmware.com/s/article/2149992

May 2017
Added
Known Issues – all ESX versions – When connecting ESXi Hosts to DataCore Servers


After upgrading to VMware ESXi 6.0 Update 3, ESX paths will only report as 'Active'. No paths will report as 'Active
(I/O)', regardless of the Path Selection Policy.

April 2017
Added
Known Issues – all ESX versions – Converged Network Adaptors
When using QLogic's Dual-Port, 10Gbps Ethernet-to-PCIe Converged Network Adaptor (CNA), disable both the
adaptor's BIOS and the 'Select a LUN to Boot from' option.
This was previously documented in 'Known Issues - Third Party Hardware and Software'
https://datacore.custhelp.com/app/answers/detail/a_id/1277

Updated
VMware ESXi Compatibility lists – VMware ESXi Path Selection Policies (PSP)
The information regarding the Most Recently Used (MRU) PSP and ESXi 6.x was incorrectly listed as 'Supported'. It
has been corrected to 'Not Qualified'.

February 2017
Added
VMware ESXi compatibility notes
VMware 'Fault Tolerant' or 'High Available' Clusters
Explained a specific configuration setup that DataCore cannot support when using VMware FT or HA clusters and
the reasons for that. This is also referred to again in the 'Known Issues' section.

November 2016
Updated
Appendix C - Reclaiming storage
Automatic and Manual reclamation
These two sections have been re-written with more detailed explanations and technical notes.

October 2016
Updated
The VMware ESXi Host's settings - ISCSI Connections
ESX Hosts with IP addresses that share the same IQN connecting to the same DataCore Server Front-end port is
not supported (this also includes ESXi 'Port Binding'). The supported configuration example has been updated to
make it more obvious as to what is required (along with the same, corresponding changes made to the
unsupported example so that the comparison is easy to spot).

September 2016
Added
Known Issues - general
There has been a general re-organization of this section separating all issues into subsections determined by the
version of ESXi that the known issue refers to.

Known Issues - 6.x


Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more than one Front End
mapping to each DataCore Server may cause unexpected loss of access
This has now been fixed in VMware ESXi 6.0, Patch Release ESXi600-201608001 (see
https://kb.vmware.com/s/article/2145663) and was documented previously in VMware’s own internal SR#15597438602.

Known Issues - 5.5


Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more than one Front End
mapping to each DataCore Server may cause unexpected loss of access
This affects ESX 5.5. This is documented in VMware’s own internal SR#15597438602. Please contact VMware directly
about this as DataCore is not aware of any fix for ESXi 5.5 at this time.

Updated
The VMware ESXi Host's settings – ISCSI Connections
The information that was previously in the 'Known Issues' section regarding connections from multiple NICs
sharing the same IQN has been moved to this section as it affects all versions of ESX and is not so much a 'Known
Issue' as a configuration requirement.

Known Issues – ESX 6.x


Unable to access filesystem for MSCS cluster nodes after vMotion


A fix, as well as a workaround, is available from the VMware knowledge base article:
https://kb.vmware.com/s/article/2144153

August 2016
Added
Known Issues
ESXi/ESX hosts with visibility to RDM LUNs being used by MSCS nodes may take a long time to start or to complete
a LUN rescan. Applies to ESX 6.x, 5.x and 4.x. Please see: https://kb.vmware.com/s/article/1016106

July 2016
Added
The DataCore Server's settings
Added link:
Video: Configuring ESX Hosts in the DataCore Management Console
https://datacore.custhelp.com/app/answers/detail/a_id/1637

Updated
This document has been reviewed for SANsymphony 10.0 PSP 5.

VMware ESXi compatibility lists


ESX 4.1 is now 'not supported' for SANsymphony 10.x – previously listed as 'unqualified'.
Because ESX 4.x is considered by VMware to be 'End of Availability' (https://kb.vmware.com/s/article/2039567)
DataCore would not be able to get assistance from VMware if it were needed for any issues that were found during
'Self Qualification'.

Known Issues
vMotion causing loss of access to filesystem for MSCS cluster nodes (2144153)
This was previously listed as "Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have
more than one Front End mapping to each DataCore Server may cause unexpected loss of access". A
Knowledgebase article has now been released by VMware https://kb.vmware.com/s/article/2144153

April 2016
Updated
Known Issues - VMware 6.0
Storage PDL responses may not trigger path failover in vSphere 6.0
https://kb.vmware.com/s/article/2144657.
Note: This affects both vSphere 6.0 and 6.0 U1 customers. A fix is available in 6.0 U2.

February 2016
Updated
List of qualified VMware Versions - Qualification notes on VMware-specific functions
Removed references specific to 'End of Life' versions of SANsymphony-V – this includes all versions of
SANsymphony-V 8.x and any version of 9.x that are PSP 3 or earlier.

December 2015
Updated
List of qualified VMware Versions - Qualification notes on VMware-specific functions
Path Selection Policies and VMware ESX 6.x
For ESX 6.x, Fixed and Round Robin Path Selection Policies are both tested and supported by DataCore and both
are also listed on VMware's own Hardware Compatibility List.

VSphere APIs for Storage Awareness (VASA)


For ESX 6.x, VASA is tested and supported by DataCore and is also listed in VMware's own Compatibility Guide.

VSphere APIs for Virtual Volumes (VVOL)


For ESX 6.x, VVOL is tested and supported by DataCore and is also listed in VMware's own Compatibility Guide.

November 2015
Updated


SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now ‘End of Life’. Please see:
End of Life Notifications https://datacore.custhelp.com/app/answers/detail/a_id/1329

October 2015
Updated
Known Issues – VMware ESXi 5.x and 6.x
DataCore have been informed that there is now a ‘hotfix’ from VMware for the previously documented known issue
“Running Microsoft Cluster Services in a Virtual Machine on Virtual Disks that have more than one Front End
mapping to each DataCore Server may cause unexpected loss of access” (VMware’s own SR#15597438602). Contact
VMware for more information.

July 2015
Added
List of qualified VMware ESXi Versions - Notes on qualification
This section has been updated and new information added regarding the definitions of all ‘qualified’, ‘unqualified’
and ‘not supported…’ labels. A new section on Linux distributions that are no longer in development has also been
added at the end of this section.

Known Issues
Moved some of the information from the Host Configuration section, where problems can arise, into the ‘Known
Issues’ section. ISCSI Port Binding is no longer considered supported: even if it is configured to use different
subnets (as previously recommended), the sharing of IQNs by different iSCSI Initiators on the ESXi Hosts cannot be
avoided, and this can lead to situations where different IP Addresses with the same IQN try to log into the same
DataCore FE Port and are unable to do so. Please read the ‘Known Issues’ section for more detail.

May 2015
Added
Known Issues – VMware ESXi 5.x and 6.x
An issue has been identified by VMware regarding Microsoft Clusters in Virtual Machines using SANsymphony-V
Virtual Disks served to more than one path on the same ESX host, which can lead to unexpected loss of access.
Under ‘heavy’ load, the new VMFS heartbeat process used by ESX 5.5 Update 2 and 6.x may fail with a false ‘ATS
miscompare’ message.

Updated
VMware ESXi 6.x - generally
Sections that apply to only VMware ESXi 6.x have been explicitly labelled to avoid ambiguity.

April 2015
Added
VMware ESXi Path Selection Policies (all)
It has been observed that different versions of ESXi may or may not ‘auto configure’ the correct SATP claim rule for
Round Robin or Fixed Path Selection Policies when presented with Virtual Disks from SANsymphony-V. Therefore,
more explicit instructions on how to create custom rules have been added.

Note: Existing SANsymphony-V installations probably do not need to worry about this new information as it does
not conflict with what was stated previously; but DataCore recommend that you review the section just to make
sure that your Virtual Disks are correctly configured.

Updated
List of qualified VMware ESXi Versions
Added VMware ESXi 6.x

Host Settings - VMware ESXi All versions


ISCSI Port binding
Clarified the statement regarding using same subnets as the VMKERNEL port.

Configuring VMware ESXi Path Selection Policies


General Notes – this section has been re-ordered. No new information has been added.


2014 and earlier


December
Added
DataCore Server Settings
Installing a DataCore Server ‘inside’ a VMware ESXi Virtual Machine
VMware ESXi Path Selection Policies
Which Path Selection Policy does DataCore Software Recommend?
Added some explanation on a frequently asked question based on the differences between Fixed and Round Robin
Path Selection Policies.

The ‘Preferred Server’ setting when using the … PSP


Added more detailed explanation regarding the SANsymphony-V ‘Preferred Server’ setting and how it applies to
each of the three supported Path Selection Policies (Round Robin, Fixed and MRU).

Updated
Appendix D - Moving from Most Recently Used to Round Robin or Fixed Path Selection Policies
Added more information about how to reduce the likelihood for downtime (by using vMotion).

November
Added
Known Issues
Most of the information was moved from the ‘Known Issues: Third Party Hardware and Software’ document:
https://datacore.custhelp.com/app/answers/detail/a_id/1277

Updated
List of qualified VMware ESXi versions
‘Not Supported’ has now been changed to mean explicitly ‘Not Supported for Mirrored or Dual Virtual Disks’. Single
Virtual Disks are now always considered supported.

Appendix B: Reclaiming Storage from Disk Pools


For ESXi 5.5 Hosts, the command to reclaim VMFS deleted blocks has changed since earlier versions of ESXi 5.x. A link
to the appropriate VMware KB article for the later version of ESXi has therefore been added.

July
Updated
VMware ESXi Path Selection Policies – all types
The command to verify that a given SATP type had been set was incorrect for the later versions of VMware ESXi. It
was listed as:
esxcli nmp satp listrules -s [SATP_Type]
and should have been listed as:
esxcli storage nmp satp rule list -s [SATP_Type]

VMware ESXi Path Selection Policies – Fixed


Added clarifying notes at the start of this section, as the specific requirements for the Host Settings within the
SANsymphony-V Management Console, using the Fixed Path Selection Policy with VMware ESXi, contradict the
general statement (for all other Host Operating Systems) in the SANsymphony-V Release Notes regarding the use of
the ‘All’ setting for the Preferred Server setting.

June
Updated
List of qualified VMware ESXi Versions
Updated to include SANsymphony-V 10.x

May
This document combines all of DataCore’s VMware information from older Technical Bulletins into a single
document including:

Technical Bulletin 5b: “VMware ESXi vSphere 4.0.x Hosts”.


Technical Bulletin 5c: “VMware ESXi vSphere 4.1.x Hosts”.
Technical Bulletin 5d: “VMware ESXi vSphere 5.x Hosts”.
Note: Technical Bulletin 5a: “VMware ESXi 2.x and 3.x Hosts” contains versions not supported with SANsymphony-V,
so the information is not relevant to this document and has not been included.
Technical Bulletin 8: “Formatting Host’s File Systems on Virtual Disks created from Disk Pools”.
Technical Bulletin 11: “Disk Timeout Settings on Hosts”.
Technical Bulletin 16: “Reclaiming Space in Disk Pools”.


Added
Host Settings: VMware ESXi All Versions:
Notes on VMware iSCSI Port Binding

VMware ESXi Path Selection Policies:


‘Fixed AP’ is no longer included as this is not a supported Path Selection Policy with SANsymphony-V.

‘Fixed’ is supported (this was inconsistently documented across the different Technical Bulletins) but only with the
Preferred Server setting set to ‘All’.

‘Most Recently Used’ must only be used without the ALUA option set on the Host. However, no versions of VMware
ESXi, without the ALUA option set, have been qualified with SANsymphony-V, so this Path Selection Policy is
considered ‘unqualified’.

Appendix A: This section gives more detail on the Preferred Server and Preferred Path settings with regard to how it
may affect a Host.

Appendix B: This section incorporates information regarding “Reclaiming Space in Disk Pools” (from Technical
Bulletin 16) that is specific to VMware Hosts.

Appendix C: This section adds additional information regarding “VMware’s vStorage APIs for Array Integration (VAAI)
with SANsymphony-V”.

Appendix D: This section adds more comprehensive steps for “Moving from Most Recently Used to Fixed or Round
Robin Path Selection Policy”.

Updated
DataCore Server Settings: VMware ESXi 4.0.x Hosts: Regarding Virtual Disk Names.
Host Settings: SCSI Reservation locking between VMware ESXi Hosts.

VMware ESXi Path Selection Policies: Previously the Preferred Server setting of ‘All’ was explicitly stated to not be
used within the SANsymphony-V Management Console. However, ‘Fixed’ requires that the Host’s Preferred Server
setting is set to ‘All’. ‘Round Robin’ may use the ‘All’ setting although caution is advised and more explanation is
provided in Appendix A why it may not be advisable.

An overall improvement of the explanations to most of the required Host Settings and DataCore Server Settings.

Technical Bulletin 5d: “VMware ESXi vSphere 5.x Hosts”

January 2014
Updated
The note on how to move from ‘Most Recently Used’ (with the ALUA option not checked) to ‘Fixed’/Round Robin
(with the ALUA option checked) for a DataCore Disk, with regard to SANsymphony-V 9.0 PSP3 and later versions.

December 2013
Added
VSphere ESXi 5.5 is qualified and no additional settings (from all previous 5.x versions) are needed. The SCSI UNMAP
primitive is supported from SANsymphony-V 9.0 PSP4.

Updated
DataCore Server configuration settings section (‘Virtual Disks mapped to more than one Host may need to use the
same LUN ‘number’ …’) for SANsymphony-V. Added a ‘warning’ note at the start of each Path Selection Policy (PSP),
cautioning the user that a VM’s Operating System configuration may not be supported by VMware for a particular
PSP (i.e. at the time of publication VMware state that MSCS VMs are not supported with the Round Robin PSP).

April 2013
Removed
All references to SANmelody as this product is now ‘End of Life’ as of December 31, 2012

March 2013
Added
Use VMFS5 for VSphere Metro Storage Clusters (vMSC).

February 2013
Updated


The ‘General notes on path selection policies’, to allow for different behavior with the VMware vCenter Integration
function of SANsymphony-V.

October 2012
Removed
All but one of the Advanced Settings; all other settings are no longer needed and can be ignored (there is no
requirement to reset or change the existing values for these other settings and they can be left as they are).

July 2012
Added
Support for SANsymphony-V 9.x; no new technical information. Added extra steps to set the default path selection
policy to ‘Fixed’ instead of ‘MRU’ under the ‘Fixed/Round Robin path selection policy’ section. Added note under the
‘General’ section that:
i. VAAI is now supported - with SANsymphony-V 9.x and ESXi 5.x.
ii. Strengthened warning that MRU is not supported with ALUA

June 2012
Added
Two new settings to be applied under the ‘General’ section for the Hosts (Disk.UseLunReset and Disk.UseDeviceReset).

May 2012
Updated
The DataCore Server and Host minimum requirements.

Removed
All references to ‘End of Life’ versions that are no longer supported as of December 31 2011. Updated notes at the
start of ‘General notes for Path Selection Policies’. Updated copyright. Added note to ‘General notes on path
selection policies for ESXi 5.x’ on selecting the preferred path of Virtual Disk with multiple connections for
VMW_PSP_FIXED to the same DataCore Server.

December 2011
Initial publication of Technical Bulletin.

Technical Bulletin 5c: “VMware ESXi vSphere 4.1.x Hosts”

June 2013
Added
A ‘warning’ note at the start of each Path Selection Policy (PSP), cautioning the user that a VM’s Operating System
configuration may not be supported by VMware for a particular PSP (i.e. at the time of publication VMware state that
MSCS VMs are not supported with the Round Robin PSP).

April 2013
Removed
All references to SANmelody as this product is now ‘End of Life’ as of December 31 2012. Updated the DataCore
Server Configuration Settings and added Preferred Server notes.

July 2012
Added
Support for SANsymphony-V 9.x. No new settings required. Added notes under ‘General’ section that:
i. VAAI is not supported with SANsymphony-V and ESXi 4.1.
ii. Strengthened warning that MRU is not supported with ALUA

June 2012
Added
Two new settings to be applied under the ‘General’ section for the Hosts (Disk.UseLunReset and Disk.UseDeviceReset).

May 2012
Updated
The DataCore Server and Host minimum requirements. Removed all references to ‘End of Life’ SANsymphony and
SANmelody versions that are no longer supported as of December 31 2011. Added notes at the start of ‘General’ notes
for Path Selection Policies. Updated copyright. Updated ‘Fixed AP and Round Robin Path Selection Policy’ with
regard to ‘preferred paths’. Existing users should re-check their configurations and make any appropriate changes
as necessary.

November 2011
Updated


URL to VMware SAN Configuration guides changed.

October 2011
Removed
All references to ‘End of Life’ SANsymphony and SANmelody versions that are no longer supported as of July 31 2011.
Moved known issues out of this Technical Bulletin and into the ‘Known Issues: Third Party Software/Hardware with
DataCore Servers’ document. Added MRU path policy. Added important note on how to verify path selection policy
in each case. For SANsymphony-V the first 12 characters of the Virtual Disk name no longer needs to be unique.

February 2011
Added
Support for SANsymphony-V 8.x.

September 2010
Initial publication of Technical Bulletin.

Technical Bulletin 5b: “VMware ESXi vSphere 4.0.x Hosts”

June 2013
Added
A ‘warning’ note at the start of each Path Selection Policy (PSP), cautioning the user that a VM’s Operating System
configuration may not be supported by VMware for a particular PSP (i.e. at the time of publication VMware state that
MSCS VMs are not supported with the Round Robin PSP).

April 2013
Removed
All references to SANmelody as this product is now ‘End of Life’ as of December 31, 2012

July 2012
Added
Support for SANsymphony-V 9.x. No new settings required. Corrected the option for SCSI.CRTimeoutDuringBoot and
added back SCSI.ConflictRetries in the ESX(i) Host configuration settings - General.

June 2012
Added
Two new settings to be applied under the ‘General’ section for the Hosts (Disk.UseLunReset and Disk.UseDeviceReset).

November 2011
Updated
URI to VMware SAN Configuration guides changed.

October 2011
Removed
All references to ‘End of Life’ versions that are no longer supported as of July 31 2011. Moved all issues not specific to
configuring Hosts or DataCore Servers out of this Technical Bulletin and into the ‘Known Issues: Third Party
Software/Hardware with DataCore Servers’ document. Added important note on how to verify path selection policy
in each case. Changed requirement for Most Recently Used managed path policy – do not use the ‘ALUA’ option.

March 2011
Added
Support for SANsymphony-V 8.x

June 2010
Added
Support for 'Round-Robin' path selection policy with SANsymphony 7.0 PSP 3 Update 4 and SANmelody 3.0 PSP 3
update 4.

December 2009
Added
Support for ‘Fixed Path’ path selection policy with SANsymphony 7.0 PSP 3 and SANmelody 3.0 PSP 3. Previously
only MRU was supported.

October 2009
Initial publication of Technical Bulletin.



The authority on real-time data

Copyright © 2019 by DataCore Software Corporation. All rights reserved.

DataCore, the DataCore logo and SANsymphony are trademarks of DataCore Software Corporation. Other DataCore
product or service names or logos referenced herein are trademarks of DataCore Software Corporation. All other
products, services and company names mentioned herein may be trademarks of their respective owners.

ALTHOUGH THE MATERIAL PRESENTED IN THIS DOCUMENT IS BELIEVED TO BE ACCURATE, IT IS PROVIDED “AS IS”
AND USERS MUST TAKE ALL RESPONSIBILITY FOR THE USE OR APPLICATION OF THE PRODUCTS DESCRIBED AND
THE INFORMATION CONTAINED IN THIS DOCUMENT. NEITHER DATACORE NOR ITS SUPPLIERS MAKE ANY
EXPRESS OR IMPLIED REPRESENTATION, WARRANTY OR ENDORSEMENT REGARDING, AND SHALL HAVE NO
LIABILITY FOR, THE USE OR APPLICATION OF ANY DATACORE OR THIRD PARTY PRODUCTS OR THE OTHER
INFORMATION REFERRED TO IN THIS DOCUMENT. ALL SUCH WARRANTIES (INCLUDING ANY IMPLIED
WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, FITNESS FOR A PARTICULAR PURPOSE AND AGAINST
HIDDEN DEFECTS) AND LIABILITY ARE HEREBY DISCLAIMED TO THE FULLEST EXTENT PERMITTED BY LAW.

No part of this document may be copied, reproduced, translated or reduced to any electronic medium or machine-
readable form without the prior written consent of DataCore Software Corporation.
