
Best Practice Guidelines

SANsymphony-V
September 2015

Global Leader in Storage Virtualization Software


Table of contents
Overview
Who is this document for?
Which version of SANsymphony-V does this apply to?
Recent changes made to this document
Previous changes made to this document
High level design objectives
Hardware configuration recommendations for the DataCore Server
Memory
CPU
The number of CPUs for a DataCore Server
Power
RAID controllers and Storage Arrays
Fibre Channel connections
ISCSI connections
TCP/IP Network Topology
Summary of hardware configuration recommendations
Software configuration recommendations for the DataCore Server
The Windows operating system
Windows service packs, updates, security and hot fixes
Upgrading to newer versions of the Windows operating system (including 'R2' versions)
Upgrading the SANsymphony-V software
Third-party software
ISCSI connections
Other required TCP/IP connections
Summary of software configuration recommendations
Replication recommendations
The Replication buffer
The Replication network links
Other Replication considerations
Summary of Replication recommendations
Previous Changes



Overview
Each DataCore implementation will always be unique and giving advice that applies to
every installation is therefore difficult. The information in this document should be
considered as a set of ‘general guidelines’ and not necessarily strict rules.

DataCore cannot guarantee that following these best practices will result in a perfect
solution – there are simply too many separate (and disparate) components that make up a
complete SANsymphony-V installation, most of which are out of DataCore Software’s
control – but these guidelines will significantly increase the likelihood of a more secure,
stable and high-performing SANsymphony-V installation.

Who is this document for?


This guide assumes that DataCore-specific terms - e.g. Virtual Disk, Disk Pool, CDP and Replication - and their respective functions are understood.

It does not replace DataCore technical training, nor does it remove any need to use a DataCore Authorized Training Partner, and it assumes fundamental knowledge of Microsoft Windows and the SANsymphony-V suite of products.

Also see
DataCore Training
http://www.datacore.com/Support/Training.aspx

How to become a DataCore Certified Implementation Engineer


http://datacore.custhelp.com/app/answers/detail/a_id/1301

Which version of SANsymphony-V does this apply to?


Where a recommendation applies to a particular version of SANsymphony-V, this will be indicated in the text; otherwise, all versions of SANsymphony-V 9.x and 10.x are covered by this document.

SANsymphony-V 8.x is now end of life.

Also see
End of Life Notifications
http://datacore.custhelp.com/app/answers/detail/a_id/1329



Recent changes made to this document
New information added since last update (August 2015)

 Added a new section:


Replication recommendations
This section details best practices for the asynchronous Replication feature.

Previous changes made to this document


Please see the 'Previous Changes' section at the end of this document.



High level design objectives
The design objective of any SANsymphony-V configuration is always an 'end-to-end' process from the user to the data. Availability, redundancy and performance cannot be considered in isolation; they are not separate functions but integrated parts of the entire infrastructure. The following information provides some high-level recommendations which are often overlooked or forgotten about.

Keep it simple
Avoid complexity at all levels.
Overly complicated environments may appear to cover all possible situations, but the more complex the design, the more likely unforeseen errors will occur, and the harder the system becomes to maintain, support and extend in the future.

A simple approach is recommended whenever possible.

‘Separation’ is the key


Avoid single points of failure
Dependencies between major components - such as switches, fabrics, storage, etc. - can impact the whole environment; any of these can become a single point of failure (SPOF).

• Distribute components across separate racks, in separate rooms, buildings and even 'across sites'.
• Keep storage components away from the public network.
• Avoid connecting redundant devices to the same power circuit.
• Use redundant, UPS-protected power sources and connect every device to them. A
UPS back-up does not help much if it fails to notify a Host to shut down because
that Host's management LAN switch was itself not connected to the UPS-backed
power circuit.
• Use redundant network infrastructures and protocols where the failure of one
component does not make access unavailable to others.
• Do not forget environmental components. A failed air conditioner may collapse
all redundant systems located in the same datacenter. Rooms on the same floor
may be affected by a single burst water pipe (even though they are technically
separated from each other).


Monitoring and event notification


Monitoring and notification
Highly available systems keep services alive even if half of the environment has failed, but these situations must always be recognized and responded to in a timely manner so that the responsible personnel can fix them as soon as possible and avoid further problems.

Knowledge
Document and share information
Document the environment properly; keeping it up-to-date and accessible. Establish
'shared knowledge' between at least two people who have been trained and are familiar
with all areas of the infrastructure.

Control user access


Not all servers are the same
Make sure that the difference between a 'normal' server and a ‘DataCore Server’ is
understood. A DataCore Server should only be operated by a trained technician.

Best Practices are not necessarily pre-requisites


Some of the Best Practices listed here may not be applicable to your installation or cannot
be applied because of the physical limitations of your installation – for example, it may not
be possible to install any more CPUs or perhaps add more PCIe Adaptors or maybe your
network infrastructure limits the number of connections possible between DataCore
Servers and so on.

Therefore, each set of recommendations is accompanied by a more detailed explanation so that, when a best practice cannot be followed, there is an understanding of how that may limit your SANsymphony-V installation.



Hardware configuration
recommendations for the
DataCore Server
For DataCore Servers running inside a Virtual Machine (e.g. within a VM managed by VMware or Windows using Hyper-V), a few of the hardware recommendations may appear not to apply; however, the hypervisor host's own server hardware may need to be 'reconfigured' instead. This will be indicated at the end of the sections where DataCore considers this approach to be appropriate.

Memory
Amount of memory
The minimum amount of memory for a DataCore Server is 8GB. The recommended amount of memory for a DataCore Server depends entirely on the size and complexity of the SANsymphony-V configuration. Use the 'DataCore Server Memory Considerations' document available from the Support Website (http://datacore.custhelp.com/app/answers/detail/a_id/1543) to calculate the memory requirement for the type, size and complexity of the SANsymphony-V configuration, and always allow for future growth.

Memory type
To avoid any data integrity issues while I/O is being handled by the DataCore Server’s own
Cache, ECC Memory Modules should be used.

If applicable, enable the server hardware’s Command per Clock (CPC) setting in the server
BIOS (sometimes known as CPC Mask or CPC Override).

Also see
Recommended BIOS settings on a DataCore Server
http://datacore.custhelp.com/app/answers/detail/a_id/1467


CPU
NB. Unless specified, a ‘CPU’ in this section can refer to a single physical core, a single hyper-
threaded core or a single ‘virtual’ CPU (e.g. when running a DataCore Server inside a Virtual
Machine).

Processor type
Any x64 processor (except for Intel’s Itanium family) is supported for use in a DataCore
Server but use ‘server-class’ processors rather than those intended for ‘workstations’.

Processor speed
DataCore Software has not found any significant performance differences between the CPU
manufacturers for processors with similar architectures and frequencies when running
SANsymphony-V.

Faster (higher frequency) CPUs are always preferred over slower CPUs, as they can process
more instructions per second and so we recommend using faster cores over more (slower)
cores. Additional CPU Sockets may be necessary to use all of the available PCIe/Memory-
Sockets on the server’s motherboard. Please consult your server hardware vendor.

Power saving modes


Often known as ‘C states’; these power saving settings are configurable on most modern
server hardware and are used to put a CPU into a ‘low-power’ state if the processor is seen
to be idle.

SANsymphony-V constantly polls all Fibre Channel and iSCSI ports so as to handle I/O
requests immediately, as they arrive (in the case of Front-end or Mirror ports) or when
they need to be sent on (in the case of Mirror or Back-end). Waiting for CPUs to ‘wake’ from
one of these low-power states interferes with this polling process and significantly adds to
overall I/O latency. Wherever possible, disable all ‘low-power’ settings on all CPUs and
ensure that they are always running at the maximum possible power and speed. Please
refer to the server vendor’s own documentation on how to adjust this setting.

Intel’s Hyper-Threading and Turbo Boost technologies


Specific to Intel processors, Hyper-Threading should always be enabled where possible but
Turbo Boost will interfere with SANsymphony-V’s I/O polling operations (see the ‘Power
saving modes’ section above) and so should always be disabled.

SANsymphony-V inside a Virtual Machine


The Hypervisor Host’s own server BIOS should have these power saving modes, Hyper-
Threading and Turbo Boost set accordingly instead.

Also see
‘Recommended BIOS settings on a DataCore Server’ from the Support Website:
http://datacore.custhelp.com/app/answers/detail/a_id/1467


The number of CPUs for a DataCore Server


NB. Unless clearly stated, a ‘CPU’ in this section refers to a single physical core, a single hyper-
threaded core or a single ‘virtual’ CPU (when running a DataCore Server inside a Virtual
Machine).

The recommended amount


This will depend on a number of factors:
• The number of Front-end, Mirror and Back-end Port connections.
• Certain SANsymphony-V features that can require additional CPU processing:
  • Live Performance and Recorded Performance features
  • The Replication feature

When using Hyper-Threading


Even if the DataCore Server is using Hyper-Threading, for performance considerations, the
minimum requirement of a DataCore Server to have at least 2 CPUs should always mean 2
physical CPUs in the server and not 2 ‘Hyper-threaded’ CPUs (i.e. just 1 physical CPU with
the Hyper-threaded function enabled).

Additional requirements when running on a physical server


• 1 additional CPU for both the Live Performance and Performance Recording features.
• 2 additional CPUs for the Replication feature.
• 1 additional CPU for each Fibre Channel port or 3 for each pair of iSCSI ports¹ that use the DataCore driver (regardless of the port's role), up to 10 additional CPUs².
Example:
For a DataCore Server with 4 Fibre Channel ports (2 Mirror and 2 Back-end Port Roles) and 2 iSCSI connections (2 Front-end Port Roles), we would recommend 10 CPUs in total - 10 physical CPUs or 5 with Hyper-Threading. If either of SANsymphony-V's Live Performance or Performance Recording functions were required, then an additional CPU would be recommended.

Running Windows Hyper-V on a DataCore Server
Also known as DataCore's 'Virtual SAN' solution - where only the Hosts will be running inside Virtual Machines - additional physical CPUs may be needed by Hyper-V to be able to create enough virtual processors for those Host Virtual Machines. Please consult Microsoft's Hyper-V documentation to see how many additional cores will be required.

¹ iSCSI connections generally have a much larger overhead, when compared to Fibre Channel ports, due to the extra work required to encapsulate and de-encapsulate SCSI data to and from IP packets.

² This does not mean that SANsymphony-V can only manage a maximum of 10 Fibre Channel or iSCSI ports, but that CPUs over and above 10 will not necessarily provide any more significant performance gains, whereas using more than 10 ports will always result in increased throughput.
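For illustration only, the per-port arithmetic described in this section can be expressed as a small sizing helper. This is a hedged sketch of the rules as stated above (a minimum of 2 physical CPUs, 1 per Fibre Channel port, 3 per pair of iSCSI ports with the extra port CPUs capped at 10, plus the feature CPUs); the function name and parameters are illustrative and not part of SANsymphony-V, and the result does not replace the worked example or DataCore's own guidance.

```python
import math

def recommended_cpus(fc_ports: int, iscsi_ports: int,
                     performance_features: bool = False,
                     replication: bool = False) -> int:
    """Rough CPU count for a physical DataCore Server, following the
    rules stated in this section. Illustrative only."""
    base = 2                                      # minimum of 2 physical CPUs
    port_cpus = fc_ports * 1                      # 1 CPU per Fibre Channel port
    port_cpus += math.ceil(iscsi_ports / 2) * 3   # 3 CPUs per pair of iSCSI ports
    port_cpus = min(port_cpus, 10)                # beyond 10 extra port CPUs adds little
    feature_cpus = (1 if performance_features else 0) + (2 if replication else 0)
    return base + port_cpus + feature_cpus

# Hypothetical configuration: 2 FC ports, 4 iSCSI ports, Replication enabled.
print(recommended_cpus(fc_ports=2, iscsi_ports=4, replication=True))  # -> 12
```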


Running a DataCore Server inside a virtual machine


The minimum number of [Virtual] CPUs for running a DataCore Server inside a Virtual
Machine is 2 – the same as running in a physical server - but the additional CPU calculations
used for physical DataCore Servers (see the previous section above), cannot be applied to
virtual DataCore Servers.

There will always be contention from other vCPUs on other Virtual Machines (i.e. that are
being used for Hosts) and even if the Hypervisor could ‘dedicate’ physical CPUs just for the
use of the DataCore Server’s virtual machine it would almost certainly come at a cost for
those same Host virtual machines, which would be restricted to the remaining CPUs on the
Hypervisor for all their workloads.

Even if the same number of vCPUs were to be created to match those of an equivalent
physical DataCore Server, there is still no guarantee that all of these vCPUs would be used
at the same rate and throughput as a physical DataCore Server would with physical CPUs.

We therefore recommend, for a DataCore Server running inside a virtual machine, to use
between 4 and 6 vCPUs depending on workload and the number of additional features
being used.

Also see
Installing a DataCore Server within a Virtual Machine
http://datacore.custhelp.com/app/answers/detail/a_id/1155

Power
Use redundant and uninterruptable power supplies (UPS).

Also see
UPS Support
http://www.datacore.com/SSV-Webhelp/DataCore_Servers_and_Server_Groups.htm


RAID controllers and Storage Arrays


Configure the DataCore Server’s own Boot Disk for redundancy; use RAID 1 (usually
simpler, less complex to configure and has less overhead than RAID 5).

RAID and Storage Array controllers used to manage physical disks in Disk Pools need to be
able to handle I/O from multiple Hosts connected to the DataCore Server; so for high
performance/low latency hardware, use independent buses or backplanes designed for
‘heavy’ workloads (‘workstation’ hardware is not usually designed for such workloads).

A low-end RAID controller will deliver low-end performance. An integrated (onboard) RAID controller that is often supplied with the DataCore Server may only be sufficient to handle just the I/O expected for the boot drive. Controllers that have their own dedicated CPU and cache are capable of managing much higher I/O workloads and many more physical disks. Consult with your storage vendor about the appropriate controller to meet your expected demands.

Also see
Storage Array Guidelines for DataCore Servers
http://datacore.custhelp.com/app/answers/detail/a_id/1302

Fibre Channel connections


Multi-ported adaptors
Adapters used for Fibre Channel connections are usually available in single, dual or quad-port configurations, and often there is little difference in performance capability when, for example, comparing two single-port adapters with one dual-port adapter, or two dual-port adapters with one quad-port adapter¹.

There is however a significant redundancy implication when using single adapters with
multiple ports on them as most hardware failures usually end up affecting all ports rather
than just one. Using adapters that have a smaller number of ports on them minimizes the
risk of multiple failures happening at the same time.

Use independent SAN switches for redundant fabrics.

¹ This assumes that there are always an adequate number of 'PCIe lanes' available in the PCIe slot being used for the adapter. Please refer to your server hardware vendor's own documentation for this.


ISCSI connections
Multi-ported adaptors
Adapters used for iSCSI connections are usually available in single, dual or quad-port configurations, and often there is little difference in performance capability when, for example, comparing two single-port adapters with one dual-port adapter, or two dual-port adapters with one quad-port adapter¹.

There is however a significant redundancy implication when using single adapters with
multiple ports on them as most hardware failures usually end up affecting all ports rather
than just one. Using adapters that have a smaller number of ports on them minimizes the
risk of multiple failures happening at the same time.

NIC teaming/ link aggregation/Spanning Tree Protocols (STP/RSTP/MSTP)


For better performance use faster, separate network adaptors and links instead of teaming multiple, slower adaptors. For high availability use more, individual network connections and Multipath I/O software rather than relying on either teaming or spanning tree protocols to manage redundancy. Just like Fibre Channel environments, use independent switches for redundant fabrics; this also prevents 'network loops', making spanning tree protocols obsolete, and simplifies the overall iSCSI implementation.

Fundamentally, SCSI load-balancing and failover functions are managed by Multipath I/O protocols¹; TCP/IP uses a completely different set of protocols for its own load-balancing and failover functions. When SCSI commands that are managed by Multipath I/O protocols are also 'carried' by TCP/IP protocols (i.e. iSCSI), interaction between the two protocols for the same function can lead to unexpected disconnections or even complete connection loss.

It is not recommended to use NIC teaming with iSCSI connections as it adds more complexity without any real gain in performance. Although teaming iSCSI Targets (i.e. Front-end or Mirror ports) would increase the available bandwidth to that target, it still only allows a single target I/O queue, whereas, for example, two separate NICs would allow two independent target queues with the same overall bandwidth. None of the Spanning Tree Protocols (STP, RSTP and MSTP) are recommended on networks used for iSCSI connections either, as they may cause unnecessary interruptions to I/O; for example, other, unrelated devices generating unexpected network-topology changes can cause STP to re-route iSCSI commands inappropriately or even block them completely from their intended target.

Also see:
For a list of adapters that can be used for Fibre Channel or iSCSI connections in a DataCore
Server see ‘Qualified Hardware Components’
http://datacore.custhelp.com/app/answers/detail/a_id/1529 from the Support Website.

¹ Mirrored Virtual Disks that are configured to use multiple iSCSI Mirror paths on the DataCore Server are, by default, auto-configured to be managed by Microsoft's MPIO using the 'Round Robin with Subset' policy.


TCP/IP Network Topology


Understanding the concept of a Server Group’s ‘Controller Node’
Where a Server Group has two or more DataCore Servers in it, one of them will be
designated as the controller node for the whole group. The controller node is responsible
for managing what is displayed in the SANsymphony-V Management Console for all
DataCore Servers in the Server Group – for example; receiving status updates for the
different objects in the configuration for those other DataCore Servers (e.g. Disk Pools,
Virtual Disks and Ports etc.), including the posting of any Event messages for those same
objects within the SANsymphony-V console.

The controller node is also responsible for the management and propagation of any
configuration changes made in the SANsymphony-V Management Console regardless of
which DataCore Server’s configuration is being modified, and makes sure that all other
DataCore Servers in the Server Group always have the most recent and up-to-date changes.

The 'election' of which DataCore Server is to become the controller node is decided by the SANsymphony-V software automatically, and takes place whenever:

• A DataCore Server is removed from or added to the existing Server Group
• The existing controller node is shut down
• The existing controller node becomes 'unreachable' via the TCP/IP network to the rest of the Server Group (e.g. an IP network outage).

The decision on which DataCore Server becomes the controller node is decided
automatically between all the Servers in the Group and cannot be manually configured.

It is also important to understand that the controller node does not manage any Host, Mirror or Back-end I/O (i.e. in-band connections) for other DataCore Servers in the Server Group. In-band I/O is handled by each DataCore Server independently of the other Servers in the Server Group, regardless of whether it is the elected controller or not. Nor does the controller node send or receive Replication data configured for another DataCore Server in the same Server Group, although it will manage all Replication configuration changes and Replication status updates regardless of whether it is the Source Replication Server or not.

Logging into the SANsymphony-V Management Console


Even if the SANsymphony-V Management Console is used to log in to one of the other
DataCore Servers in the group (i.e. an ‘unelected node’) that other server will still connect
directly to the controller node to make configuration changes or to display the information
in its own SANsymphony-V Management Console.

This means that all DataCore Servers in the same Server Group must have a routable
TCP/IP connection to each other so that if the controller node ‘moves’ to a different server,

then the new controller node must also be able to connect to all of the remaining DataCore Servers in the group¹.

Workstations running the SANsymphony-V Management Console


Workstations which only have the SANsymphony-V Management Console component
installed cannot become ‘controller nodes’ and never directly send or receive configuration
information for any Server Group they connect to. Just like an ‘unelected’ node the
workstation will connect to the controller node to make configuration changes or to display
the information in its own SANsymphony-V Management Console (see the previous
section).

This means that even if the workstation is on a separate network segment from the
DataCore Servers (e.g. in a different vLAN) it must still be able to send and receive TCP/IP
traffic to and from all the DataCore Servers in that vLAN.
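Because every DataCore Server (and any workstation running the Management Console) must be able to reach every other server in the Server Group over TCP/IP, a quick reachability test from each machine can catch routing or VLAN problems before they surface as controller-node issues. The sketch below is illustrative only: the host names and the TCP port are placeholders, not DataCore values - consult the 'Windows Security Settings Disclosure' referenced later in this document for the actual ports SANsymphony-V uses.

```python
import socket

# Placeholder names and port - substitute the real DataCore Servers in the
# Server Group and a TCP port taken from the Windows Security Settings Disclosure.
DATACORE_SERVERS = ["dcs-node1", "dcs-node2", "dcs-node3"]
TCP_PORT = 12345  # placeholder only

def check_reachability(servers, port, timeout=3.0):
    """Attempt a TCP connection to each server and report the result."""
    for name in servers:
        try:
            with socket.create_connection((name, port), timeout=timeout):
                print(f"{name}:{port} reachable")
        except OSError as err:
            print(f"{name}:{port} NOT reachable ({err})")

if __name__ == "__main__":
    check_reachability(DATACORE_SERVERS, TCP_PORT)
```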
Network connections for specific SANsymphony-V functions
SANsymphony-V's Management Console, the VMware vCenter Integration component, Replication and the Performance Recording function (when using a remote SQL Server) all use their own separate TCP/IP sessions. To avoid unnecessary network congestion and delay, and to avoid losing more than one of these functions at once should any problems occur with one or more network interfaces, we recommend using a separate network connection for each function.

ISCSI and other Inter-node communications between DataCore Servers


While it is technically possible to share iSCSI I/O, Replication data and the SANsymphony-V Management Console's own inter-node traffic over the same TCP/IP connection, for performance reasons, as well as to avoid losing more than one of these functions at once, we recommend using dedicated and separate network interfaces for each iSCSI port.

NIC teaming for Inter-node communication between DataCore Servers


For all TCP/IP traffic where Multipath I/O protocols are not being used (i.e. non-iSCSI traffic), we recommend using NIC teaming to provide redundant network paths to other DataCore Servers.

We also recommend that each teamed NIC is on its own separate network and that 'failover' mode is used rather than 'load balancing'; there is no specific performance requirement for inter-node TCP/IP communication, and using 'failover' mode means that configuring and managing the network connections and switches is simpler. It also makes troubleshooting any future connection problems easier.

¹ 'Re-election' of the controller node takes place if the node is shut down or if it becomes unavailable on the network to the rest of the Server Group for any reason.


Summary of hardware configuration recommendations


Memory
• See 'DataCore Server Memory Considerations' from the Support Website:
  http://datacore.custhelp.com/app/answers/detail/a_id/1543
• Use ECC Memory.
• Enable CPC Settings.

CPU
• Use 'server class' processors.
• Always enable Hyper-Threading (if available).
• Always disable 'Turbo Boost' (applies to Intel CPUs only).
• Disable all CPU 'low-power' modes (also known as 'C states').

The number of CPUs for a DataCore Server


• A minimum of 2 physical CPUs (or 4 CPUs Hyper-Threaded).
• Add 1 CPU for the Live Performance and Performance Recording functions.
• Add 2 CPUs for all Replication operations.
• Add 1 CPU for each Fibre Channel port and/or 3 CPUs for each pair of iSCSI ports.
• When Hyper-V is implemented on the DataCore Server, additional physical CPUs will be needed for the Hyper-V Virtual Machines configured as Hosts.

The number of CPUs for a DataCore Server running in a Virtual Machine

• A minimum of 2 vCPUs.
• An additional 2 - 4 vCPUs depending on workload and the additional features being used.

Power
• Use redundant power supplies.
• Use an uninterruptable power supply (UPS).

RAID controllers and Storage Arrays

• Use high performance/low latency hardware with independent buses or backplanes designed for 'heavy' workloads ('workstation' hardware is not usually designed for such workloads).
• Avoid using 'onboard' disk controllers for anything other than the DataCore Server's own Boot Disks.
• Configure the DataCore Server's Boot Disk for RAID 1.
• Use a separate controller for physical disks used in either Disk Pools or for the Replication buffer.


• Also see 'Storage Array Guidelines for DataCore Servers' from the Support Website: http://datacore.custhelp.com/app/answers/detail/a_id/1302

Fibre Channel connections


• Use many adapters with a smaller number of ports on them, as opposed to fewer adapters with larger numbers of ports.
• Use independent SAN switches for redundant fabrics.

ISCSI connections
• Use many adapters with a smaller number of ports on them, as opposed to fewer adapters with larger numbers of ports.
• Use faster, separate network adaptors instead of NIC teaming.
• Do not use NIC teaming or STP protocols with iSCSI connections. Use more, individual network connections (with Multipath I/O software) to manage redundancy.
• Use independent network switches for redundant iSCSI networks.

TCP/IP Network Topology


• Ensure DataCore Servers in the same Server Group all have routable TCP/IP connections to each other at all times.
• Ensure workstations that are only using the SANsymphony-V Management Console have a routable TCP/IP connection to all DataCore Servers in the Server Group at all times.
• If using SANsymphony-V's VMware vCenter Integration component, ensure the server running vCenter has a routable TCP/IP connection to all DataCore Servers in the Server Group at all times.
• Do not route 'non-iSCSI' TCP/IP traffic over iSCSI connections.
• Use dedicated and separate TCP/IP connections for each of the following:
  • The SANsymphony-V Management Console
  • Replication transfer
  • Performance Recording when connecting to a remote SQL server.
• Use NIC teaming, in 'failover' mode, for inter-node connection redundancy for 'non-iSCSI' TCP/IP traffic, with separate networks for each NIC.
• Use independent network switches for redundant TCP/IP networks used for inter-node communication between DataCore Servers.



Software configuration
recommendations for the
DataCore Server

The Windows operating system


Always refer to SANsymphony-V's 'Software Requirements' found here:
http://www.datacore.com/products/technical-information/SANsymphony-V-Prerequisites.aspx

Use non-OEM versions of Microsoft Windows to avoid the installation of unnecessary, third-party services that will require extra system resources and so potentially interfere with SANsymphony-V.

Windows Update settings


Never configure the DataCore Server's Windows Update settings to 'Install updates automatically'; if applicable to your network, use the 'Check for updates but let me choose when to install them' option. Also see the section 'Windows service packs, updates, security and hot fixes' below.

Startup and Recovery/System Failure


The SANsymphony-V installer will by default enable the recommended setting
automatically – Kernel Memory Dump. No additional settings are required.

Virtual Memory/Page File


SANsymphony-V does not use the page file for any of its critical operations. No additional
settings are required and the default page file settings should be used.

The default size of the page file created by Windows is determined by the amount of Physical
Memory installed in the DataCore Server and the type of memory dump that is configured.
The SANsymphony-V installer will automatically change the memory dump type – usually
Automatic memory dump – to Kernel Memory Dump (see the section ‘Startup and
Recovery/System Failure’ above) to make sure that if any crash analysis is required, the
correct type of dump file will be generated by the Windows Operating System.

This may mean that a DataCore Server that has a relatively small boot disk and significantly
‘large’ amounts of physical memory installed results in a page file that fills (or nearly fills) the
boot disk after the installation of SANsymphony-V. In this case, it is still recommended to keep
the Kernel Memory Dump setting but manually enter a custom value for the page file size as
large as practically possible by unchecking the ‘Automatically manage paging file size for all
drives’ option.


Power Options
Select the High Performance power plan under Control Panel\Hardware\Power Options.

Windows Error Reporting (WER) for user-mode dumps


Enable WER and configure it for ‘Full dumps’ on the DataCore Server – this is not
configured by default. User-mode dumps are especially useful to help analyze problems
that occur for any SANsymphony-V Management or Windows Console issues (i.e. non-
critical, Graphical User Interfaces) if they occur. Please refer to ‘Collecting User-Mode
Dumps’: http://msdn.microsoft.com/en-us/library/bb787181 from Microsoft’s own
website on how to set this up.
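As a convenience, the registry values described in the Microsoft article above can also be set programmatically. The sketch below is an illustration built on stated assumptions (it must be run as Administrator, it uses Python's winreg module, and it sets the standard LocalDumps key with DumpType=2 for full dumps); verify the value names against the 'Collecting User-Mode Dumps' article before relying on it.

```python
import winreg

# Registry location documented in Microsoft's 'Collecting User-Mode Dumps' article.
LOCAL_DUMPS_KEY = r"SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps"

def enable_full_user_mode_dumps(dump_folder=r"C:\CrashDumps", dump_count=10):
    """Create the LocalDumps key and request full (type 2) user-mode dumps."""
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, LOCAL_DUMPS_KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "DumpType", 0, winreg.REG_DWORD, 2)       # 2 = full dump
        winreg.SetValueEx(key, "DumpCount", 0, winreg.REG_DWORD, dump_count)
        winreg.SetValueEx(key, "DumpFolder", 0, winreg.REG_EXPAND_SZ, dump_folder)

if __name__ == "__main__":
    enable_full_user_mode_dumps()
```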

Network Time Protocol (NTP)


Synchronize the system clocks for all DataCore Servers in the same Server and Replication Groups. Although the DataCore Server's system clock has no effect on any Host, Mirror or Storage I/O operations, synchronizing the clocks with each other within a Server Group will help avoid unexpected behavior for operations within SANsymphony-V that are potentially time-sensitive.

Examples include:
 SANsymphony-V Management Console Tasks that use a Scheduled Time trigger.
 Continuous Data Protection’s retention time settings.
 Time-sensitive licenses (e.g. that contain fixed, expiration dates either for evaluation
or migration purposes).

It will also help stop misleading ‘Configuration Conflict’ warnings between DataCore
Servers in the same Server Group (e.g. after a planned DataCore Server reboot).

It is also recommended to synchronize all Hosts' system clocks, as well as any SAN or network switch hardware clocks (if applicable), with the DataCore Servers. This can be especially helpful when using DataCore's VSS on a Host (where backup and restore operations may take place), but more generally it helps with any troubleshooting where a Host's own system logs need to be checked against those of a DataCore Server, since many 'SAN events' often occur over very short periods of time (e.g. Fibre Channel or iSCSI disconnect and reconnection issues between Hosts and DataCore Servers).
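A quick way to spot clock drift is to compare each machine's offset against the same NTP source. The following is a minimal SNTP query sketch using only the Python standard library; the NTP server name is a placeholder, and in practice the Windows Time service (w32time) would normally be configured to keep the clocks synchronized rather than a script like this.

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

def ntp_offset(server="pool.ntp.org", port=123, timeout=5.0):
    """Return the difference (seconds) between the NTP server time and the local clock."""
    packet = b"\x1b" + 47 * b"\0"          # SNTP v3 client request
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, port))
        data, _ = sock.recvfrom(48)
    transmit_secs = struct.unpack("!I", data[40:44])[0]  # transmit timestamp (seconds)
    server_time = transmit_secs - NTP_EPOCH_OFFSET
    return server_time - time.time()

if __name__ == "__main__":
    print(f"Local clock offset: {ntp_offset():+.2f} seconds")
```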


Windows service packs, updates, security and hot fixes


Service packs
Service packs that have passed qualification for use with SANsymphony-V can be found in
the SANsymphony-V minimum requirements page. Never install a Windows service pack that
has not been qualified on a DataCore Server.
Updates and security fixes¹
Because of the number of these that are released between full service packs, individual Windows updates and security fixes (available via the regular Windows Update Service) are never qualified separately; however, DataCore Software always applies the latest updates and security fixes to all of its 'development servers' as soon as they become available. This means that, unless explicitly listed in the 'Configuration Notes' section of the SANsymphony-V Software release notes, we recommend that all Windows updates and security fixes currently available from Microsoft should be applied whenever possible.
Software updates via Windows Update Service for third-party drivers
Sometimes, third-party vendor drivers (e.g. Fibre Channel drivers that are being used for
Back-end connections or NIC drivers used for iSCSI connections) may be distributed via the
Windows Update Service and occasionally these drivers are known to cause problems when
using SANsymphony-V. These will be listed in the ‘Known Issues - Third-party Hardware
and Software’ document.
Hotfixes
Occasionally, Microsoft will make available hotfixes that are not distributed via the normal
Windows Update Service, and sometimes these are applicable to the DataCore Server and
should be applied. A full list of all hotfixes that must be applied to a DataCore Server can be
found on the SANsymphony-V minimum requirements web page. Hotfixes that are not listed
on the SANsymphony-V minimum requirements web page should never be applied to the
DataCore Server but if there is a specific requirement to install an unlisted hotfix (e.g. by
Microsoft’s own technical support or hardware vendor) please contact DataCore Customer
Support for further advice.

Also see
SANsymphony-V’s minimum requirements
http://www.datacore.com/products/technical-information/SANsymphony-V-Prerequisites.aspx

For the latest SANsymphony-V release notes


Software Downloads and Documentation
http://datacore.custhelp.com/app/answers/detail/a_id/1419

Known Issues - Third-party Hardware and Software


http://datacore.custhelp.com/app/answers/detail/a_id/1277

¹ This also includes all Microsoft Windows' 'Cumulative Updates' and 'Update Rollups'.


Upgrading to newer versions of the Windows operating system (including 'R2' versions)
Versions of Windows that have passed qualification for a specific version of SANsymphony-
V will be listed in both the SANsymphony-V Software release notes and the SANsymphony-V
minimum requirements page.

Never upgrade ‘in-place’ to a newer version of Windows operating system, for example
upgrading from Windows 2008 to Windows 2012 or upgrading from Windows 2012 to
Windows 2012R2; even if the newer version is considered qualified by DataCore the
upgrade will stop the existing SANsymphony-V installation from running. Instead of an in-
place upgrade the DataCore Server’s operating system must be installed ‘as new’.

R2 versions of a particular Windows Operating System also need to be qualified for use on
a DataCore Server. Any ‘R2’ versions of Windows that have passed qualification for a
specific version of SANsymphony-V will be listed in both the SANsymphony-V Software
release notes and the SANsymphony-V minimum requirements page.

Also see
Please refer to ‘Reinstalling the DataCore Server's Windows Operating System’ from
the Support Website: http://datacore.custhelp.com/app/answers/detail/a_id/1537 which
gives instruction on how to manage this, while at the same time retaining the existing
SANsymphony-V configuration.

SANsymphony-V’s minimum requirements


http://www.datacore.com/products/technical-information/SANsymphony-V-Prerequisites.aspx

For the latest SANsymphony-V release notes


Software Downloads and Documentation
http://datacore.custhelp.com/app/answers/detail/a_id/1419

Upgrading the SANsymphony-V software


All the instructions and considerations for updating existing versions of SANsymphony-V
or when upgrading to a newer, major version of SANsymphony-V are documented in the
SANsymphony-V Software release notes.

The SANsymphony-V Software release notes are available either as a separate download or
come bundled with the SANsymphony-V software. For the latest SANsymphony-V release
notes see: ‘DataCore Software Downloads’ from the Support Website:
http://datacore.custhelp.com/app/answers/detail/a_id/1419


Third-party software
It is recommended not to install third-party software on a DataCore Server. SANsymphony-V requires significant amounts of system memory as well as CPU processing; it will also prevent certain system devices (e.g. disk devices) from being accessed by other software components that may be installed on the DataCore Server, which may lead to unexpected errors from those other software components.

The purpose of the DataCore Server should not be forgotten: running it as, for example, a Domain Controller or a Mail Server/Relay alongside SANsymphony-V must not be done, as this will affect the overall performance and stability of the DataCore Server.

DataCore recognize that ‘certain types’ of third-party software are required to be able to
integrate the DataCore Server onto the user’s network. These include:

• Virus scanning applications
• UPS software agents
• The server vendor's own preferred hardware and software monitoring agents

In these few cases, and as long as these applications or agents do not need exclusive access
to components that SANsymphony-V needs to function correctly (i.e. Disk, Fibre Channel or
iSCSI devices), then it is possible to run these alongside SANsymphony-V.

Always consult the third-party software vendor for any additional memory requirements
their products may require and refer to the ‘Known Issues - Third-party Hardware and
Software’ document for any potential problems with certain types of third-party software
that have already been found to cause issues or need additional configuration.

DataCore Support may ask for third-party products to be removed in order to assist with
Troubleshooting.

Also see
Known Issues - Third-party Hardware and Software
http://datacore.custhelp.com/app/answers/detail/a_id/1277

DataCore Server Memory Considerations


http://datacore.custhelp.com/app/answers/detail/a_id/1543

Changing Cache Size


http://www.datacore.com/SSV-Webhelp/Changing_Cache_Size.htm


ISCSI connections
For a list of adapters that can be used for iSCSI connections in a DataCore Server see
‘Qualified Hardware Components’
http://datacore.custhelp.com/app/answers/detail/a_id/1529 from the Support Website.

For other recommendations see 'NIC teaming/link aggregation/Spanning Tree Protocols (STP/RSTP/MSTP)' in the hardware configuration recommendations for iSCSI connections.

Also see
Known Issues - Third-party Hardware and Software
http://datacore.custhelp.com/app/answers/detail/a_id/1277

Other required TCP/IP connections


The SANsymphony-V ‘Connection Interface’ setting
Except for iSCSI I/O, all other TCP/IP traffic sent and received by a DataCore Server is
managed by the SANsymphony-V Connection Interface setting.

This includes:

• When applying SANsymphony-V configuration updates to all servers in the same Server Group.
• Any UI updates while viewing the SANsymphony-V Management Console, including state changes and updates for all the different objects within the configuration (e.g. Disk Pools, Virtual Disks, Snapshots and Ports etc.).
• Configuration updates and state information to and from remote Replication Groups.
• Configuration updates when using SANsymphony-V's VMware vCenter Integration component.
• SQL updates when using a remote SQL server for Performance Recording.

The Connection Interface's default setting ('All') means that SANsymphony-V will use any available network interface on the DataCore Server for its host name resolution; which interface is used is determined by the Windows operating system and how it has been configured and connected to the existing network.

It is possible to change this setting, and choose an explicit network interface (i.e. IP
Address) to use for host name resolution instead, but this requires that the appropriate
network connections and routing tables have been set up correctly and are in place.
SANsymphony-V will not automatically retry other network connections if it cannot resolve
to a hostname using an explicit interface.

We recommend leaving the setting to ‘All’ and use the appropriate ‘Hosts’ file or DNS
settings to control host name resolution.


Windows Hosts file / DNS settings


There is no preference for managing DataCore Server host name resolution between using
the local ‘Hosts’ file or DNS. Either method can be used.

DataCore do recommend however using Host Name resolution over just using IP addresses
as it is easier to manage any IP address changes that might occur, planned or unexpected,
by being able to simply update any ‘Hosts’ file or DNS entries instead of ‘reconfiguring’ a
Replication group or remote SQL server connection for Performance Recording (i.e.
manually disconnecting and reconnecting), which is disruptive.

When using a ‘Hosts’ file, do not add any entries for the local DataCore Server but only for
the ‘remote’ DataCore Servers and do not add multiple, different entries for the same
server (e.g. each entry has a different IP address and/or server name for the same server)
as this will cause problems when trying to (re)establish network connections.
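Since duplicate or conflicting 'Hosts' file entries are a common cause of connection problems, a small sanity check of the file can help. The sketch below simply flags host names that are mapped to more than one IP address; the file path is the standard Windows location, and the check is illustrative rather than exhaustive.

```python
from collections import defaultdict
from pathlib import Path

HOSTS_PATH = Path(r"C:\Windows\System32\drivers\etc\hosts")

def conflicting_host_entries(path=HOSTS_PATH):
    """Return host names that are mapped to more than one IP address."""
    name_to_ips = defaultdict(set)
    for line in path.read_text().splitlines():
        line = line.split("#", 1)[0].strip()      # strip comments and blank lines
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            name_to_ips[name.lower()].add(ip)
    return {name: ips for name, ips in name_to_ips.items() if len(ips) > 1}

if __name__ == "__main__":
    conflicts = conflicting_host_entries()
    if conflicts:
        for name, ips in conflicts.items():
            print(f"'{name}' has multiple entries: {', '.join(sorted(ips))}")
    else:
        print("No conflicting entries found.")
```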

Firewalls and network security


The SANsymphony-V software installer will automatically reconfigure the DataCore
Server’s Windows’ firewall settings to allow it to be able to communicate with other
DataCore Servers in the same Server or Replication groups. No additional action should be
required.

If using an 'external' firewall solution or another method to secure the IP networks between servers, then refer to the 'Windows Security Settings Disclosure' for the full list of TCP ports required by the DataCore Server and ensure that connections are allowed through. A full list can be found in the online help:
http://www.datacore.com/SSV-Webhelp/windows_security_settings_disclosure.htm

Replication Groups
Please see the section 'TCP/IP connections between Replication Groups' in the Replication recommendations chapter for Replication-specific TCP/IP configuration information.


Summary of software configuration recommendations


Windows operating system
• Use non-OEM versions of Microsoft Windows.

Windows system settings


• Never configure Windows Update settings to 'Install updates automatically'.
• Use Kernel Memory Dumps for System Failure settings - later versions of SANsymphony-V set this automatically during installation.
• Use the default Page File settings.
• Use the High Performance power plan.
• Configure Windows Error Reporting (WER) for 'Full' user-mode dumps.
• Synchronize the system clocks for all DataCore Servers in the same Server Group, any associated Replication Groups and Hosts.

Windows service packs, updates, security and hot fixes


• Only use service packs that have been qualified.
• Apply all updates and security fixes available from Microsoft, including 'Cumulative Updates' and 'Update Rollups'.
• Do not apply any Windows updates for (third-party) drivers to Fibre Channel HBAs using the DataCore driver.
• Always apply the Windows hotfixes listed on the SANsymphony-V minimum requirements web page.
• Do not apply hotfixes that are not included as part of the normal Windows Update Service and are not listed on the SANsymphony-V minimum requirements web page.

Upgrading to newer versions of the Windows Operating System


• Only use versions of the Windows operating system that have been qualified.
• Never upgrade 'in place' to newer versions of the Windows operating system.
• Only use R2 versions of Windows that have been qualified.
• Never upgrade 'in place' versions of Windows to their R2 equivalent.

Third-party Software
• It is recommended not to install third-party software on a DataCore Server.


ISCSI connections
• See 'NIC teaming/link aggregation/Spanning Tree Protocols (STP/RSTP/MSTP)' in the hardware configuration recommendations for iSCSI connections.

Other required TCP/IP connections


• Leave the DataCore Server's Connection Interface setting at its default ('All').
• Use either 'Hosts' or DNS settings to control all host name resolution for the DataCore Server.
• Use a managed 'Hosts' file (or DNS) instead of just using IP addresses.
• Never install a Windows service pack that has not been qualified.
• Any Windows updates and security fixes that are currently available from Microsoft's Windows Update Service should be applied whenever possible.
• For Firewalls and other network security requirements please refer to the 'Windows Security Settings Disclosure' via the online help: http://www.datacore.com/SSV-Webhelp/windows_security_settings_disclosure.htm



Replication recommendations
The Replication buffer
The location of the Replication buffer will determine the speed that the replication process
can perform its three basic operations:

• Creating the Replication data (write).
• Sending the Replication data to the remote DataCore Server (read).
• Deleting the Replication data (write) from the buffer once it has been processed successfully by the remote DataCore Server.

Therefore, the disk device that holds the Replication buffer should be able to manage at least twice the write throughput of all replicated Virtual Disks combined.

If the disk device used to hold the Replication buffer is too slow, it may not be able to empty fast enough to accommodate new Replication data. This will result in a full buffer and an overall increase in the replication time lag (or latency) on the Replication Source DataCore Server.

A full Replication buffer will prevent future Replication checkpoint markers from being
created until there is enough available space in the buffer and in extreme cases may also
affect overall Host performance for any Virtual Disks served to it that are being replicated.

Using a dedicated storage controller for the physical disk(s) used to create the Windows disk device where the buffer is located will give the best possible throughput for the replication process. Do not use the DataCore Server's own boot disk, so as not to cause contention for space and disk access.

It is technically possible to 'loop back' a Virtual Disk to the DataCore Server as a local SCSI disk device and then use it as the Replication buffer's location. This is not recommended: apart from the extra storage capacity this requires, there may be unexpected behavior when the SANsymphony-V software is 'stopped' (e.g. for maintenance), as the Virtual Disk being used would suddenly no longer be available to the Replication process, potentially corrupting the replication data that was being flushed while the SANsymphony-V software was stopping.

Creating a mirror from the Virtual Disk being 'looped back' may be considered a possible solution to this, but this configuration is not recommended either: if the mirrored Virtual Disk used for the Replication buffer also has to handle a synchronous mirror resynchronization (e.g. after an unexpected shutdown of the DataCore mirror partner), the additional reads and writes used by the mirror synchronization process, as well as the loss of the DataCore Server's own write caching (while the mirror is not healthy), will significantly reduce the overall speed of the Replication buffer.

Which RAID do I use for the Replication buffer?


While performance of the Replication buffer is important, a balance may need to be struck between protecting the data held in the Replication buffer (i.e. RAID 1 or 5) and improving read/write performance (e.g. RAID 0 or even no RAID at all). It is therefore difficult to give a specific recommendation here: whether the gain in read/write performance from not using RAID 5 (for example) is significant will depend on the properties of the physical disks and the RAID controller being used to create the Windows disk device that holds the Replication buffer. Comparative testing is strongly advised.

The size of Replication buffer


The size of the buffer will depend on the following:

• The amount of write bytes sent to all Virtual Disks configured for replication.
• The speed of the Windows disk device that the buffer is using.
• The speed of the Replication network link (see the next section) to the Replication Group.

Situations where the Replication link is 'down', and where the replication process will continue to create and store replication data in the buffer until the link is re-established, need to be considered too. For example, plan for an 'acceptable' amount of network downtime to the Replication Group (e.g. 24 hours); knowing (even approximately) how much replication data could be generated in that time allows the buffer to be sized appropriately to prevent the Replication 'In log' state.

Planning for future growth in the amount of replication data must also be considered. Creating GPT-type Windows disk devices and using Dynamic Disks will give the most flexibility, as it should then be trivial to expand an existing NTFS partition used for the location of an existing Replication buffer if required.

Be aware that determining the optimum size of the buffer for a particular configuration is
not always trivial and may take a few attempts before it is known.
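To make the sizing discussion above concrete, the sketch below works through the two figures mentioned in this section: the buffer disk needs at least twice the combined write throughput of all replicated Virtual Disks, and the buffer capacity should cover the replication data generated during an 'acceptable' link outage. The numbers and the safety margin are illustrative assumptions, not DataCore-supplied values.

```python
def replication_buffer_sizing(write_mb_per_s: float,
                              outage_hours: float = 24.0,
                              safety_factor: float = 1.5):
    """Rough sizing figures for the Replication buffer.

    write_mb_per_s : combined write throughput (MB/s) of all replicated Virtual Disks
    outage_hours   : how long the Replication link may be down without filling the buffer
    safety_factor  : illustrative headroom on top of the calculated capacity
    """
    # The buffer disk absorbs writes and reads at the same time, so plan for
    # at least twice the combined replicated write throughput.
    required_disk_throughput = 2 * write_mb_per_s

    # Capacity to hold the data generated while the link is down.
    data_during_outage_gb = write_mb_per_s * outage_hours * 3600 / 1024
    suggested_capacity_gb = data_during_outage_gb * safety_factor

    return required_disk_throughput, suggested_capacity_gb

# Example: 50 MB/s of replicated writes, planning for a 24-hour link outage.
throughput, capacity = replication_buffer_sizing(50)
print(f"Buffer disk throughput needed: >= {throughput:.0f} MB/s")
print(f"Suggested buffer capacity:     ~ {capacity:.0f} GB")
```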


The Replication network links


TCP/IP connections between Replication Groups
A remote Replication Server will receive two different types of TCP/IP traffic from the local
DataCore Server Group:

• All Replication configuration changes and updates made via the SANsymphony-V Management Console. This includes Virtual Disk states and all Replication performance metrics (e.g. transfer speeds and the number of files left to transfer). This TCP/IP traffic is always sent to and from the 'controller nodes' of both the Source and Destination Replication Groups¹.

• The Replication data between the Source and Destination Replication Groups. This TCP/IP traffic is always sent from the DataCore Server that was selected when the Virtual Disk was configured for Replication on the Source Server Group, regardless of which DataCore Server is the 'controller node'.

In both cases, the DataCore Server’s own Connection Interface setting is still used.

This means that if the ‘controller node’ is not the same DataCore Server that is configured
for a particular Virtual Disk’s Replication, then the two different TCP/IP traffic streams (i.e.
Configuration changes & updates and Replication data) will be split between two different
DataCore Servers on the Source with each DataCore Server using their own Connection
Interface setting.

As the elected ‘controller node’ can potentially be any DataCore Server in the same Server
Group it is very important to make sure that all DataCore Servers in the same Local
Replication Group can route all TCP/IP traffic to all DataCore Servers in the Remote
Replication Group and vice versa.

Also see
Windows Security Settings Disclosure
http://www.datacore.com/SSV-Webhelp/windows_security_settings_disclosure.htm

Assigning a Server Group Connection Interface


http://www.datacore.com/SSV-Webhelp/Establishing_Server_Groups.htm

Replication Operations
http://www.datacore.com/SSV-Webhelp/Replication_Operations.htm

¹ See the section 'TCP/IP Network Topology - Understanding the concept of a Server Group's 'Controller Node'' in the hardware configuration recommendations for more explanation.


Multiple network interfaces for multiple Replication Groups


For Server Groups that have more than one Replication Group configured, using a separate
and dedicated network interface for each connection to each of the Replication Groups may
help improve overall replication transfer speeds (from the single buffer on the local
DataCore Server).

Network link speed


The speed of the network link will affect how fast the replication process will be able to
send the replication data to the remote DataCore Server and therefore influence how fast
the buffer can empty. Therefore the link speed will have a direct effect on the sizing of the
Replication buffer. For optimum network bandwidth usage the network link speed should
be at least half the speed of the read access speed of the buffer.
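The rule of thumb above (link speed at least half the buffer's read speed) is easy to check with a small unit conversion, since disk throughput is usually quoted in MB/s and network links in Gbit/s. A brief sketch with illustrative figures:

```python
def link_fast_enough(buffer_read_mb_per_s: float, link_gbit_per_s: float) -> bool:
    """Check that the Replication link is at least half the buffer's read speed."""
    link_mb_per_s = link_gbit_per_s * 1000 / 8   # Gbit/s -> MB/s (decimal units)
    return link_mb_per_s >= buffer_read_mb_per_s / 2

# Example: a buffer readable at 400 MB/s needs a link of at least 200 MB/s,
# i.e. roughly a 1.6 Gbit/s effective rate.
print(link_fast_enough(buffer_read_mb_per_s=400, link_gbit_per_s=1.0))   # False
print(link_fast_enough(buffer_read_mb_per_s=400, link_gbit_per_s=2.0))   # True
```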

WAN/LAN optimization
The replication process does not have any specific WAN or LAN optimization capabilities
but can be used alongside any third-party solutions to help improve the overall replication
transfer rates between the local and remote DataCore Servers.

Other Replication considerations


Replication Transfer Priorities
Use the Replication Transfer Priorities setting - configured as part of a Virtual Disk's storage profile - to ensure that the Replication data for the most important Virtual Disks is sent more quickly than that of others within the same Server Group.

See the section ‘Replication Transfer Priority’ from the online help for more information:
http://www.datacore.com/SSV-Webhelp/Replication_Operations.htm

Replication Data Compression


When enabled, the data is not compressed while it is in the buffer but within the TCP/IP
stream as it is being sent to the remote DataCore Server. This will usually help increase
potential throughput sent to the remote DataCore Server. It is difficult to know for certain if
the extra time needed for the data to be compressed (and then decompressed on the
remote DataCore Server) will result in quicker replication transfers compared to no Data
Compression being used at all.

A simple comparison test should be made after a reasonable period of time by disabling compression temporarily and observing what (if any) differences there are in transfer rates or replication time lags.
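One way to make that comparison objective is to record transfer-rate samples over comparable periods with compression enabled and then disabled, and compare the averages. A small, hypothetical helper for that comparison; the sample values are illustrative and the measurements themselves would come from the Live Performance or Recorded Performance data.

```python
from statistics import mean

def compare_transfer_rates(rates_with_compression, rates_without_compression):
    """Compare average replication transfer rates (MB/s samples) from two test windows."""
    avg_with = mean(rates_with_compression)
    avg_without = mean(rates_without_compression)
    change_pct = (avg_with - avg_without) / avg_without * 100
    return avg_with, avg_without, change_pct

# Hypothetical samples collected over comparable periods.
with_compression = [38.0, 41.5, 40.2, 39.7]
without_compression = [33.1, 35.0, 34.2, 33.8]

avg_w, avg_wo, pct = compare_transfer_rates(with_compression, without_compression)
print(f"With compression:    {avg_w:.1f} MB/s")
print(f"Without compression: {avg_wo:.1f} MB/s")
print(f"Difference:          {pct:+.1f}% with compression enabled")
```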

See the section 'Enabling/disabling data compression during data transfer' from the online help for more information: http://www.datacore.com/SSV-Webhelp/Configuring_Server_Groups_for_Replication.htm


Any third-party, network-based compression tool can be used to replace, or add to, the compression functionality on the links used to transfer the replication data between the local and remote DataCore Servers; again, comparative testing is advised.

Avoid unnecessary ‘large’ write operations


Some special, Host-specific operations – for example; Live Migration, vMotion or host-based
snapshots - may generate significant ‘bursts’ of write I/O that may, in turn, unexpectedly fill
the Replication buffer adding to the overall replication time lag.

A Host Operating System's page (or swap) file can also generate 'large' amounts of extra, unneeded replication data, which will not be useful after it has been replicated to the remote DataCore Server.

Use separate Virtual Disks if these operations are not required to be replicated.

Some third-party backup tools may ‘write’ to any file that they have just backed up (for
example to set the ‘archive bit’ on a file it has backed up) and this too can potentially
generate extra amounts of replication data.

Use time-stamp based backups to avoid this.

Encryption
The replication process does not encrypt any of the data sent to the remote DataCore
Server, but third-party encryption tools can be used to secure the links used for the
transfer of the replication data between the DataCore Servers. DataCore do not have any
recommendations for encryption other than to mention that the extra time required for the
encryption/decryption process of the replication data might add to the overall replication
time lag. Comparative testing is advised.

Anti-Virus software
The data created and stored in the Replication buffer cannot be used to 'infect' the DataCore Server's own operating system. Using anti-virus software to check the replication data is therefore unnecessary and will just increase the overall replication transfer time, as the files are scanned, delaying their sending and removal from the buffer, as well as adding to the number of reads on the buffer disk.

Also see
Replication - How it works
http://datacore.custhelp.com/app/answers/detail/a_id/1477
And
http://www.datacore.com/SSV-Webhelp/Replication.htm

Configuring Server Groups for Replication


http://www.datacore.com/SSV-Webhelp/Configuring_Server_Groups_for_Replication.htm


Summary of Replication recommendations


The Buffer
• Use as fast a disk as possible (for example, RAID 1 or no RAID) for the best read/write performance; only use RAID protection if required and if the loss of overall performance for that protection is negligible. It should be capable of handling at least twice the write I/O throughput for all replicated Virtual Disks combined. Comparative testing between using RAID 1 and RAID 5 is advised.
• Use a dedicated SCSI controller for the best possible throughput and do not use the DataCore Server's own boot disk.
• Do not use a SANsymphony-V Virtual Disk as the location for a Replication buffer.
• In case of unexpected Replication network link connection problems, size the buffer to take into account a given period of time that the network may be unavailable without the buffer filling up.
• Use a GPT partition style and Dynamic Disks to allow for expansion of the Replication buffer in the future.

The Replication network link


• Each DataCore Server in both the 'local' Server Group and the 'remote' Replication Group must have its own routable TCP/IP connection to and from the other group.
• For optimum network bandwidth usage the network link speed should be at least half the read access speed of the buffer.
• Use a dedicated network interface for each Replication Group to get the maximum possible replication transfer speeds to and from the DataCore Server(s).
• Enable Compression to improve overall replication transfer rates.

Other Replication considerations


• Use SANsymphony-V's Replication Transfer Priorities setting to prioritize those Virtual Disks that need to be able to replicate more quickly than others.
• Use timestamp-based backups on Host files that reside on a replicated Virtual Disk, to avoid the additional replication data created when a file's 'archive bit' is used instead.
• Host operations that generate large bursts of writes - such as Live Migration, vMotion, host-based snapshots or even page/swap files - that are not required to be replicated should use separate, un-replicated Virtual Disks.
• Exclude the Replication buffer from any anti-virus software checks.


Previous Changes
August 2015
Initial Publication containing the sections:
High level design objectives
Hardware configuration recommendations for the DataCore Server
Software configuration recommendations for the DataCore Server



COPYRIGHT

Copyright © 2015 by DataCore Software Corporation. All rights reserved.

DataCore, the DataCore logo and SANsymphony are trademarks of DataCore Software
Corporation. Other DataCore product or service names or logos referenced herein are
trademarks of DataCore Software Corporation. All other products, services and company
names mentioned herein may be trademarks of their respective owners.

ALTHOUGH THE MATERIAL PRESENTED IN THIS DOCUMENT IS BELIEVED TO BE ACCURATE, IT IS PROVIDED “AS IS” AND USERS MUST TAKE ALL RESPONSIBILITY FOR THE USE OR APPLICATION OF THE PRODUCTS DESCRIBED AND THE INFORMATION CONTAINED IN THIS DOCUMENT. NEITHER DATACORE NOR ITS SUPPLIERS MAKE ANY EXPRESS OR IMPLIED REPRESENTATION, WARRANTY OR ENDORSEMENT REGARDING, AND SHALL HAVE NO LIABILITY FOR, THE USE OR APPLICATION OF ANY DATACORE OR THIRD PARTY PRODUCTS OR THE OTHER INFORMATION REFERRED TO IN THIS DOCUMENT. ALL SUCH WARRANTIES (INCLUDING ANY IMPLIED WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT, FITNESS FOR A PARTICULAR PURPOSE AND AGAINST HIDDEN DEFECTS) AND LIABILITY ARE HEREBY DISCLAIMED TO THE FULLEST EXTENT PERMITTED BY LAW.

No part of this document may be copied, reproduced, translated or reduced to any electronic medium or machine-readable form without the prior written consent of DataCore Software Corporation.
