
FalconStor Network Storage Server Virtual Appliance (NSSVA) User Guide

FalconStor Software, Inc. 2 Huntington Quadrangle, Suite 2S01 Melville, NY 11747 Phone: 631-777-5188 Fax: 631-501-7633 Web site: www.falconstor.com

Copyright 2001-2010 FalconStor Software. All Rights Reserved. FalconStor Software, IPStor, TimeView, and TimeMark are either registered trademarks or trademarks of FalconStor Software, Inc. in the United States and other countries. Linux is a registered trademark of Linus Torvalds. Windows is a registered trademark of Microsoft Corporation. All other brand and product names are trademarks or registered trademarks of their respective owners. FalconStor Software reserves the right to make changes in the information contained in this publication without prior notice. The reader should in all cases consult FalconStor Software to determine whether any such changes have been made. This product is protected by United States Patents Nos. 7,093,127 B2; 6,715,098; 7,058,788 B2; 7,330,960 B2; 7,165,145 B2; 7,155,585 B2; 7,231,502 B2; 7,469,337; 7,467,259; 7,418,416 B2; 7,406,575 B2, and additional patents pending.
51010


Contents
Introduction
Components ..... 2
Benefits ..... 2
Hardware/software requirements ..... 5
NSSVA Specification and requirement summary ..... 8
  Virtual machine configuration ..... 8
  Supported Disk Configuration ..... 9
NSSVA Configuration ..... 10
  ESX server deployment planning ..... 10
About this document ..... 10
Knowledge requirements ..... 11

Install NSS Virtual Appliance


Installation for VMware virtual infrastructure ..... 12
  Installing NSSVA via the installation script ..... 12
  Installing NSSVA via Virtual Appliance Import from a downloaded zip file ..... 13
  Installing the Snapshot Director on the ESX console server ..... 14
  Installing SAN client software on virtual host machines ..... 14
  Installing Snapshot Agents on virtual host machines ..... 15
NSS Virtual Appliance configuration ..... 16
  Basic system environment configuration ..... 16

Configuration and Management


Installing and using the FalconStor Management console ..... 19
Account Management ..... 20
Connect to the virtual appliance ..... 20
  Add License Keycode ..... 21
  Register keycodes ..... 21
Add virtual disks for data storage ..... 22
  Add the new virtual disk ..... 23
  Add the new device to the storage pool ..... 24
  Create a SAN Client for VMware ESX server ..... 25
  Assign the SAN resource to VMware ESX server ..... 27
  Assign the same SAN resource to two VMware ESX servers ..... 29
Enable VMDirectPath I/O in vSphere v4 ..... 30
  Enable the VMDirectPath Option ..... 30
  Configure a virtual machine to use a passthrough VMDirectPath PCI device/port ..... 32
  Modify a FalconStor Virtual Appliance (for ESX 3.5) to load VMware Drivers ..... 35
  Modify a FalconStor Virtual Appliance (for ESX 3.5) to load the NIC/HBA driver ..... 35


High Availability
FalconStor NSS Virtual Appliance High Availability (HA) solution ..... 38
Configuring the NSS Virtual Appliance Cross-Mirror failover ..... 39
Power Control for VMware ESX server ..... 42
  Launching the power control utility ..... 43
Check Failover status ..... 45
After failover ..... 46
  Manual recovery ..... 46
  Auto recovery ..... 46
  Fix a failed server ..... 46
Recover from a cross-mirror failure ..... 47
  Re-synchronize Cross mirror on a virtual appliance ..... 48
  Check resources and swap if possible ..... 48
  Verify and repair a cross mirror configuration ..... 48
Modify failover configuration ..... 54
  Make changes to the servers in your failover configuration ..... 54
Start/stop failover or recovery ..... 54
  Force a takeover by a secondary server ..... 54
  Manually initiate a recovery to your primary server ..... 55
  Suspend/resume failover ..... 55
Remove a failover configuration ..... 55

Replication
Overview ..... 56
Replication configuration ..... 56
  Requirements ..... 56
  Setup ..... 57
  Create a Continuous Replication Resource ..... 67
Check replication status ..... 69
  Replication tab ..... 69
  Event Log ..... 70
  Replication object ..... 70
Replication performance ..... 71
  Set global replication options ..... 71
  Tune replication parameters ..... 71
Assign clients to the replica disk ..... 72
Switch clients to the replica disk when the primary disk fails ..... 72
Recreate your original replication configuration ..... 73
Use TimeMark/TimeView to recover files from your replica ..... 74
Change your replication configuration options ..... 74
Suspend/resume replication schedule ..... 75
Stop a replication in progress ..... 75
Manually start the replication process ..... 75
Reverse a replication configuration ..... 76
Reverse a replica when the primary is not available ..... 76
  Forceful role reversal ..... 76
Relocate a replica ..... 77

Remove a replication configuration ..... 78
Expand the size of the primary disk ..... 78
Replication with other NSS features ..... 78
  Replication and TimeMark ..... 78
  Replication and Failover ..... 78
  Replication and Mirroring ..... 78
  Replication and Thin Provisioning ..... 79

Troubleshooting
NSS Virtual Appliance settings ..... 80
  Checking the resource reservation ..... 80
  Checking the virtual Network Adapter setting ..... 81
  Optimizing iSCSI software initiator performance ..... 82
  Optimizing performance when using a virtual disk on a NSSVA for iSCSI devices ..... 82
  Resolving slow performance on the Dell PERC6i ..... 82
Cross-mirror failover ..... 83

Appendix A - Checklist
A. VMware ESX Server system configuration ..... 84
B. NSS Virtual Appliance system information ..... 86
C. Network Configuration ..... 88
D. Storage Configuration ..... 89

Index



Introduction
FalconStor Network Storage Server Virtual Appliance (NSSVA) for VMware Infrastructure 3 and 4 is a pre-configured, production-ready virtual machine that delivers high speed iSCSI and virtualization storage service through VMware's virtual appliance architecture. It provides enterprise-class data protection features, including application-aware, space-efficient snapshot technology that can maintain up to 64 point-in-time copies of each volume.

The FalconStor NSS Virtual Appliance can also be used as a cost-effective virtual iSCSI SAN solution by creating a virtual SAN on a VMware ESX server and turning internal disk resources into a shareable pool of storage. If the FalconStor NSS Virtual Appliance is deployed on a single VMware ESX server, that server can share storage resources with other servers in the environment. This is accomplished without the need for external storage arrays, SAN switches, or costly host bus adapters (HBAs). Internal data drives are detected by the software and incorporated into the management console through a simple GUI. At that point, storage can be provisioned and securely allocated via the iSCSI protocol, which operates over standard Ethernet cabling.

To enable high availability (HA), the FalconStor NSS Virtual Appliance can be deployed on two VMware ESX servers that can share storage with each other as well as with additional VMware ESX servers. In this model, each NSS Virtual Appliance maintains mirrored data from the other server. If one of the servers is lost, all virtual machines that were running on the failed server can restart using the storage resources of the remaining server. Downtime is kept to a minimum as applications are quickly brought back online.

Thin Provisioning technology and space-efficient snapshots further decrease costs by minimizing consumption of physical storage resources. The Thin Replication feature minimizes bandwidth utilization by sending only unique data blocks over the wire. Built-in compression and encryption reduce bandwidth consumption and enhance security, without requiring specialized network devices to connect remote locations with the data center or DR site. Tape backup for multiple remote offices can be consolidated to a central site, eliminating the need for distributed tape autoloaders and the associated management overhead.

NSSVA is supported under the VMware Ready program for virtual appliances. It is a TOTALLY Open solution for VMware Infrastructure that enables a virtual SAN (vSAN) service directly on VMware ESX servers. The local direct attached storage becomes a shared SAN for all ESX servers on the iSCSI network. The ability to convert direct attached storage within an ESX server opens the door for small to medium enterprises to initially deploy VMware Infrastructure without the added expense of a dedicated SAN appliance and to enjoy the broader benefits of VMware's business continuity and resource management features.


Additionally, most businesses, small and large, seek out VMware's advanced enterprise features: VMware VMotion (live migration of a running virtual machine from one ESX server to another), HA (High Availability: automatic restart of virtual machines), and DRS (Distributed Resource Scheduling: moving virtual machine workloads based on preset metrics or schedules).

Components
NSSVA consists of the following components:

Component: NSS Virtual Appliance
Description: A virtual machine that runs FalconStor NSS software. This virtual appliance delivers high speed iSCSI and virtualization storage service through VMware's virtual appliance architecture: a plug-and-play VMware virtual machine running on VMware ESX server. NSSVA is a TOTALLY Open virtual storage array and a VMware Certified Virtual Appliance.

Component: FalconStor Management Console
Description: The Windows management console that can be installed anywhere there is IP connectivity to the NSS Virtual Appliance.

Component: Snapshot Agents
Description: Collaborate with Windows NTFS volumes and applications in order to guarantee that snapshots are taken with full application-level integrity for the fastest possible recovery. A full suite of Snapshot Agents is available so that each snapshot can later be used without lengthy chkdsk and database/email consistency repairs. Snapshot Agents are available for Oracle, Microsoft Exchange, Lotus Notes/Domino, Microsoft SQL Server, IBM DB2 Universal Database, Sybase, and many other applications.

Component: SAN Disk Manager
Description: Host-side software that helps you register host machines with the NSS virtual appliance.

Benefits
High Availability
Using FalconStor's NSSVA virtual SAN appliances in an Active/Passive configuration enables VMware users to deploy a highly available shared storage environment that takes advantage of VMware Infrastructure enterprise features for better manageability and resiliency. The FalconStor NSSVA highly available virtual storage configuration supports iSCSI target failover between NSSVA virtual appliances installed on the initial two ESX servers, which is required to gain the VMware HA and DRS features. VMware VMotion support requires only a single NSSVA on one ESX server in an ESX server cluster.


MicroScan Replication
In the branch or remote office, VMware Infrastructure and FalconStor NSSVA can help reduce operational costs through server and storage consolidation to a central data center. FalconStor's MicroScan Replication option with built-in WAN acceleration completes remote office server and storage consolidation IT strategies by providing highly efficient replication of branch or remote office data to your central data center. MicroScan Replication also reduces the amount of information replicated by ensuring that data already sent to the central data center is not sent more than once, thereby reducing traffic on the WAN.

VMware Site Recovery Manager (SRM) support


FalconStor NSSVA also supports VMware Site Recovery Manager (SRM) through integration with FalconStor MicroScan replication. FalconStor NSSVA, combined with VMware Infrastructure, provides a complete, highly available virtualization solution for most small to medium enterprises as well as large enterprise environments that are focused on consolidation and virtualization for remote and branch offices.

Cross-Mirror failover
FalconStor NSSVA supports Cross-Mirror failover, a non-shared storage failover option that provides high availability without the need for shared storage. It is used with virtual appliances containing internal storage, and mirroring is facilitated over a dedicated, direct IP connection. This option removes the requirement of shared storage between two partner storage server nodes and allows data functions to be swapped from a failed virtual disk on the primary server to the mirrored virtual disk on the secondary server. The disks are swapped back once the problem is resolved.

Microsoft VSS compliant


FalconStor NSSVA supports the Microsoft Windows Volume Shadow Copy Service (VSS), which provides the backup infrastructure and a mechanism for creating consistent point-in-time copies of data, known as shadow copies.


Three Versions
NSSVA is available in the following three versions:

NSSVA Standard Edition
- Includes 2 TB of storage (upgradable to 4 TB).
- Supports up to 10 clients.
- Includes the following client application support: VMware Application Snapshot Director, Storage Replication Adapters for VMware SRM*, SAN Client, Application Snapshot Agent.
  *Supported in pilot environments only.

NSSVA Standard Edition trial
- Includes all of the features of the standard edition for a 30-day period.
- Can be upgraded to the standard edition.

NSSVA Lite (free iSCSI SAN) Edition
- Does not include high availability, mirror, or replication.
- Five-client limit.
- 2 TB storage capacity.
- Can be upgraded to the standard edition.
- Does not include the following client application support: VMware Application Snapshot Director, Storage Replication Adapters for VMware SRM, SAN Client, Application Snapshot Agent.

For advanced configuration of high availability, refer to the documentation link that is included in your registration E-mail.


Hardware/software requirements
Component: NSS Virtual Appliance
Requirement: NSSVA supports the following VMware ESX server platforms:
- VMware ESX Server 3.5 Update 5
- VMware ESXi 3.5 Update 5
- VMware ESX Server 4.0 Update 1
- VMware ESXi 4.0 Update 1
All necessary critical patches for VMware ESX server platforms are available on the VMware download patches web site: http://support.vmware.com/selfsupport/download/.

Component: FalconStor Management Console
Requirement: A virtual or physical machine running any version of Microsoft Windows that supports the Java 2 Runtime Environment (JRE).

Component: VMware ESX Server hardware compatibility
Requirement: FalconStor Virtual Appliances for VMware are supported only on VMware certified server hardware. To ensure system compatibility and stability, refer to the online compatibility guide: http://www.vmware.com/resources/compatibility/search.php?action=base&deviceCategory=server. To download the Systems Compatibility Guides for ESX Server 3.5 and ESX Server 3i, go to https://www.vmware.com/resources/techresources/1032.

Component: 64-bit processor
Requirement: For maximum virtualization and iSCSI SAN service, NSSVA uses a 64-bit system architecture. To verify 64-bit virtual machine support, download the VMware utility below and execute it on the ESX server to see if the CPU supports 64-bit: http://downloads.vmware.com/d/details/processor_check_5_5_dt/dCpiQGhkYmRAZQ==


Component: Cross-mirror failover
Requirement:
- Each server must have identical internal storage.
- Each server must have at least two network ports (one for the required crossover cable). The network ports must be on the same subnet.
- Only one dedicated cross-mirror IP address is allowed for the mirror. The IP address must be 192.168.n.n.
- Only virtual devices can be mirrored. Service-enabled devices and system disks cannot be mirrored.
- The number of physical disks on each machine must match, and the disks must have matching ACSLs (adapter, channel, SCSI ID, LUN).
- When failover occurs, both servers may have partial storage. To prevent a possible dual mount situation, we strongly recommend that you use a hardware power controller, such as IPMI. Refer to Power Control for VMware ESX server on page 42 for more information.
- Prior to configuration, virtual resources can exist on the primary server as long as the identical ACSL is unassigned or unowned by the secondary server. After configuration, pre-existing virtual resources will not have a mirror. You will need to use the Verify & Repair option to create the mirror.

Component: BIOS VT support
Requirement: The VMware ESX server must be able to support hardware virtualization for the 64-bit virtual machine. To verify BIOS VT support, refer to the VMware Knowledge Base article describing the ESX command to run: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1011712

Component: 2000 MHz CPU resource reservation
Requirement: NSSVA reserves 2000 MHz of CPU resources for storage virtualization, iSCSI service, snapshot, and replication processes, ensuring sufficient resources remain for the VMware ESX server and multiple virtual machines. The specifications are:
- Two dual-core 1.5 GHz 64-bit processors, or
- One quad-core 2.0 GHz 64-bit processor

Component: 2 GB memory resource reservation
Requirement: NSSVA reserves 2 GB of memory for storage virtualization, iSCSI service, snapshot, and replication processes, ensuring sufficient resources remain for the VMware ESX server and multiple virtual machines. The specifications are:
- 500 MB for the VMware ESX server system
- 2 GB for the FalconStor NSS Virtual Appliance
- More memory for the other virtual machines running on the same ESX server


Component: Storage
Requirement: NSSVA supports up to 2 TB of storage for iSCSI storage provisioning and snapshot data. Additional storage can be added in 1 TB increments. Storage is allocated from a standard VMware virtual disk on local storage or from a raw device disk on SAN storage. NSSVA also supports Storage Pools, into which you can add different sized virtual disks. The system allocates resources for storage provisioning or snapshots on demand.

Component: Network Adapter
Requirement: NSSVA is pre-configured with two virtual network adapters that manage your multiple-path iSCSI connection or dedicated cross-mirror link. For the best network performance, the ESX server needs two physical network adapters for one-to-one mapping to the independent virtual switches and the virtual network adapters of NSSVA. In addition, the ESX server may need extra physical network adapters for Virtual Infrastructure management, VMware VMotion, or physical network redundancy:
- Two physical network adapters for one-to-one virtual network mapping to FalconStor NSSVA.
- Optional physical network adapters linked to one virtual switch for physical network adapter redundancy.
- Optional physical network adapters for Virtual Center management through an independent network.
- Optional physical network adapters for the VMotion process through an independent network.


NSSVA Specification and requirement summary


Virtual machine configuration:
Spec      VM Configuration               Reservation
CPU       Two virtual processors         2000 MHz
Memory*   2 GB                           2 GB
Disk      28 GB                          -
Network   Two virtual network adapters   -

Minimum ESX server hardware requirements:

Spec      ESX Server Configuration
CPU       Two dual-core 1.5 GHz 64-bit processors OR one quad-core 2.0 GHz 64-bit processor. Using ESX requires specific hardware and system resources. If you are using ESX 4, refer to the VMware Online Library for specific ESX hardware requirements: http://pubs.vmware.com/vsp40/wwhelp/wwhimpl/js/html/wwhelp.htm#href=install/c_esx_hw.html
Memory*   2 GB
Disk      Up to 4 TB free storage space
Network   Two physical network adapters

Note: *Memory requirements may vary depending upon your usage. Recovering a volume using more than 300 GB of TimeMark data may require additional RAM.


Supported Disk Configuration


Disk Type: Local Disks
VM Configuration: Format the disk using VMFS (or use an existing VMFS volume). Create a .vmdk file to provision to NSSVA. Virtualize the disk and create an NSS SAN resource (do not use SED). Once the ESX servers detect the NSS disk over iSCSI, you can use it as a raw disk RDM (virtual or physical) or as a VMFS volume (recommended).

Disk Type: SAN Disks*
VM Configuration: Format the SAN disk using VMFS (or use an existing VMFS volume on the SAN). Create a .vmdk file to provision to NSSVA. Virtualize the disk and create an NSS SAN resource (do not use SED). Once the ESX servers detect the NSS disk over iSCSI, you can use it as a raw disk RDM (virtual or physical) or as a VMFS volume (recommended).

Disk Type: Raw SAN Disks
VM Configuration (virtualized): Create a Raw Device Mapping (RDM) in virtual mode to provision the NSSVA. Virtualize the disk and create an NSS SAN resource (do not use SED). Once the ESX servers detect the NSS disk over iSCSI, you can use it as a raw disk RDM (virtual or physical) or as a VMFS volume (recommended).
VM Configuration (service-enabled): Create a Raw Device Mapping (RDM) in virtual mode to provision the NSSVA. Reserve the disk for service-enabled use and create an NSS SED resource. Do not preserve the device Inquiry String, so that the disk later displays as a FalconStor disk instead of a VMware virtual disk. Once the ESX servers detect the NSS disk, you must use it as a raw disk RDM (virtual or physical). Do not use VMFS format in this configuration.

Note: *Assigning an iSCSI array's LUN directly to the NSSVA's iSCSI initiator is not supported. The physical iSCSI array's LUN must be provisioned to the ESX server's iSCSI initiator, and the disks must then be configured per the instructions described in this guide.


NSSVA Configuration
ESX server deployment planning
The FalconStor NSS Virtual Appliance is a pre-configured and ready-to-run solution, installed on a dedicated ESX server in order to function as a storage server. NSSVA can also be installed on an ESX server that runs other virtual machines. To deliver a highly available storage service, NSSVA can be installed on a second VMware ESX server that will function as a standby storage server with redundant cross-mirror storage.

Dedicated NSSVA - When NSSVA is installed on a dedicated ESX server, no other virtual machine runs on the system.

Dedicated High Availability NSSVA - When NSSVA is installed on two dedicated ESX servers, they can be configured for Active/Passive high availability.

Shared NSSVA - When NSSVA is installed on an ESX server on which other virtual machines are installed or will be installed, NSSVA shares CPU and memory resources with the other virtual machines and still offers storage services for the other virtual machines on the same or other ESX servers.

Shared HA NSSVA - When NSSVA is installed on two ESX servers on which other virtual machines are installed or will be installed, NSSVA shares CPU and memory resources with the other virtual machines. The two NSSVAs can be configured for Active/Passive high availability.

About this document


This document provides step-by-step procedures for installing and using the NSSVA in a VMware ESX 3.5, ESX 4, ESXi 3.5, and ESXi 4 environment. The following topics are covered in this document:
- Installation of the virtual appliance
- Configuration of the virtual appliance
- Host-side software installation
- Protection of servers
- High availability
- Replicating data for disaster recovery purposes


Knowledge requirements
Individuals deploying NSSVA should have administrator-level experience with VMware ESX and need to know how to perform the following tasks:
- Create a new virtual machine from an existing disk
- Add new disks to an existing virtual machine as virtual disks or mapped raw disks
- Troubleshoot virtual machine networks and adapters

Although not required, it is also helpful to have knowledge of the technologies listed below:
- Linux
- iSCSI
- TCP/IP


Install NSS Virtual Appliance


Installation for VMware virtual infrastructure
The FalconStor NSS Virtual Appliance supports generic VMware ESX server 3.5, ESXi 3.5, ESX server 4, and ESXi 4. You can choose one of the easy installation methods according to your ESX server version.

Installation script for VMware ESX server 3.5 and 4: The generic VMware ESX server provides a local console and an SSH remote console connection for management. You can launch the NSSVA installation script on a local or remote console to install NSSVA.

Virtual Appliance Import for VMware ESX server 3.5, ESXi 3.5, ESX server 4, and ESXi 4: The latest VMware ESX server 4 and the ESXi hypervisor support virtual appliance import from a VMware Infrastructure Client. If the VMware ESXi server does not support local and remote consoles, you will only be able to use the virtual appliance import method to install the NSSVA onto the system.

Before installation, you must ensure that the CPU supports 64-bit operating systems and is compatible with the VMware ESX system, and that the system BIOS supports Virtualization Technology (VT). To verify 64-bit virtual machine support, go to http://downloads.vmware.com/d/details/processor_check_5_5_dt/dCpiQGhkYmRAZQ==. To verify BIOS VT support, go to http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1011712
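If you have shell access to the ESX service console, hardware virtualization support can also be checked locally. This is a hedged sketch of a commonly cited check; interpret the reported HV Support value per the VMware Knowledge Base article above:

# Query the hypervisor support flag on the ESX service console
esxcfg-info | grep "HV Support"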

Installing NSSVA via the installation script


To launch the NSSVA installation script on the ESX server console, log into the console with root privileges and follow the instructions below to complete the installation.

1. Upload the FalconStor-NSSVA.zip file to the VMware ESX server /root folder using the SCP tool.

2. Execute the unzip command to extract the package to the FalconStor-NSSVA folder.

3. Start the NSS Virtual Appliance installation by executing the following command from the unzip path: ./FalconStor-VA/nssinstall (a sample console session is shown after step 4).

The installation script performs several system checks and continues installing if the following requirements are met:


- System memory on the ESX server must be at least 2 GB.
- The ESX server must support 64-bit virtual machines.
- The ESX server must have the BIOS VT function enabled.

4. Enter the number of the VMFS volume where you will be installing the NSS Virtual Appliance system. The installation script copies the system image source and extracts it to the specified volume. The NSS Virtual Appliance is then registered onto the ESX system.

Note: For NSSVA Lite: While extracting the NSS virtual appliance system, you will be asked to enter your login credentials for the target (i.e. Please enter login information for target vi://127.0.0.1).
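For reference, a typical console session covering steps 1 through 3 might look like the following sketch; the ESX host name is a placeholder, and the exact extraction path depends on where you run unzip:

# From a workstation holding the package:
scp FalconStor-NSSVA.zip root@esx-host:/root/

# On the ESX service console, logged in as root:
cd /root
unzip FalconStor-NSSVA.zip -d FalconStor-NSSVA   # extract the package
cd FalconStor-NSSVA                              # change to the unzip path
./FalconStor-VA/nssinstall                       # start the NSSVA installation script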

Installing NSSVA via Virtual Appliance Import from a downloaded zip file
1. On the client machine, unzip the NSSVA.zip file and extract the package to any folder. For example, create a folder called FalconStor-NSSVA.

2. If not already active, launch the VMware Infrastructure/vSphere Client and connect to the ESX server with root privileges.

3. Select File --> Virtual Appliance --> Import (VI Client) / Deploy OVF Template (vSphere Client).

4. For the Import Location of the Import Virtual Appliance wizard, click the Browse button on the Import from file option. Then select the folder to which you extracted the package (i.e. the FalconStor-NSSVA folder), expand the folder, and select the file FalconStor-NSSVA.ovf in the FalconStor-VA folder. The Virtual Appliance Details page displays the virtual appliance information for FalconStor NSSVA.

5. Click Next to continue the import. The Name and Location page displays the default appliance name: FalconStor-NSSVA. You can change the name of the virtual machine; this change will not be applied to the actual appliance name.

6. On the Datastore list, click a datastore containing at least 26 GB of space for the NSSVA system import.

7. For Network Mapping, select the virtual machine network of the ESX server that the NSSVA virtual Ethernet adapter will link to.

8. On the Ready to Complete screen, review all settings and click Finish to start the virtual appliance import task. The virtual appliance import status window displays the completion percentage. It usually takes five to ten minutes to complete this task.


9. Click Close when the completion percentage reaches 100% and the import window displays Completed Successfully.

Note: When using OVF import to install the NSSVA Lite version, you will need to manually add a 100 GB data disk in order to launch the basic environment configuration.

Installing the Snapshot Director on the ESX console server


The Snapshot Director for VMware must be installed on the ESX console server. You must be root (or root equivalent) in order to install the Snapshot Director.

1. Copy the installation files to the local drive of the ESX console server. The client software (i.e. ipstorclient-x.xx-x.xxx.xxxx.rpm) is installed first; the Snapshot Director (i.e. asd_vmware-x.xx-xxxx.xxxx.rpm) is installed second.

2. Type the following command to install the client software:
rpm -ivh --nodeps /mnt/cdrom/Client/Linux/ipstorclient-x.xx-x.xxx.i386.rpm

The client will be installed to the following location: /usr/local/ipstorclient. It is important that you install the client to this location; installing the client to a different location will prevent the client driver from loading.

3. Install the Snapshot Director software (a verification query is shown after the notes below):

# rpm -ivh asd_vmware-x.xx-xxxx.i386.rpm

Note that during installation, several firewall ports will be opened to allow for snapshot notification and command line communications.

Note: The ASD is not available in the NSSVA Lite or Trial version.
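To confirm that both packages are installed, a generic rpm query can be used; the package name patterns below are assumptions based on the file names above:

# List the installed FalconStor client and Snapshot Director packages
rpm -qa | grep -i -e ipstorclient -e asd_vmware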

Installing SAN client software on virtual host machines


FalconStor SAN Client software must be installed on each virtual host machine. It runs silently in the background, requires no configuration, and is used to initiate snapshots.

1. Navigate to the NSS Agents zip file that you copied earlier to a Windows machine.

2. Extract the files from the zip file.

3. Select Install Products --> Install SAN Client. If the installation does not launch automatically, navigate to the \Client\Windows directory and run ISinstall.exe to launch the client install program.


During the installation, the Microsoft Digital Signature Warning window may appear to indicate that the software has not been certified by Microsoft. Click Yes to continue the installation process.

4. Accept the license agreement.
5. When done, click Finish.

Notes:

- If you are running Windows Server 2003 SP2 on the virtual machine and the firewall is enabled, you need to open TCP ports 11576, 11582, and 11762 for the SAN Client (a command-line example follows these notes).
- The SAN Client is not available in the NSSVA Lite or Trial version.
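On Windows Server 2003, the ports can also be opened from a command prompt rather than the firewall GUI; this is a hedged sketch using the netsh firewall syntax, and the rule names are arbitrary:

rem Open the SAN Client TCP ports in the Windows firewall
netsh firewall add portopening TCP 11576 "FalconStor SAN Client 11576"
netsh firewall add portopening TCP 11582 "FalconStor SAN Client 11582"
netsh firewall add portopening TCP 11762 "FalconStor SAN Client 11762"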

Installing Snapshot Agents on virtual host machines


Installation of the Snapshot Agents has the following requirements:
- You must be an administrator or have administrator privileges in order to install.
- SAN Client software must already be installed on the virtual machine.
- If you install a snapshot agent for an application (such as Microsoft Exchange, Microsoft SQL, or Oracle), you must also install the Windows filesystem snapshot agent.
- (Snapshot Agent for Microsoft Exchange) The Snapshot Agent has to be installed on the same virtual machine where the Exchange Server is running. Your Exchange Server must be started before installing the agent.
- (Snapshot Agent for Microsoft SQL) The Snapshot Agent has to be installed on the same machine where the SQL Server database is running. Your SQL Server must be started before installing the agent.
- (Snapshot Agent for Oracle) Your Oracle database must be started before installing the agent. Oracle archive logging must be turned on.
- (Oracle 8i only) Make sure the required library %ORA_HOME%/precompile/lib/orasql8.LIB is present in the system. If the file is not present, reinstall the Oracle client software and select Programmer as the installation type.

To install a FalconStor Snapshot Agent on a Windows system:

1. Navigate to the NSS Agents zip file that you copied earlier to a Windows machine.
2. Extract all of the files to a temporary installation directory.
3. Launch the selected Snapshot Agent setup program.
4. When prompted, review the License Agreement and agree to it to continue. After accepting the license agreement, the installation program installs the Snapshot Agent into the same directory where the SAN Client is installed.
5. When done, click Finish.


Install NSS Virtual Appliance The SAN client automatically starts the Snapshot Agent for you. In addition, it will be automatically started each time the client is restarted.

Note: The Snapshot Agent is not available in the NSSVA Lite or Trial version.

NSS Virtual Appliance configuration


Basic system environment configuration
Before starting NSSVA, it is recommended that you first add a virtual disk to NSSVA for data storage. Refer to Add virtual disks for data storage on page 22 for detailed instructions. Then return to this section to continue configuration.

The first time you log into the NSSVA console, the FalconStor Virtual Appliance Setup utility pops up automatically and displays the basic environment configuration. If you want to configure the system after the initial setup, you can run the utility by executing the vaconfig command on the NSSVA virtual appliance console. Once you run the vaconfig utility, the system checks whether VMware Tools should be updated.

1. Launch the VMware Infrastructure Client and connect to the ESX server with an account with root privileges.

2. Right-click the installed FalconStor-NSSVA and then click Open Console. If the NSSVA has not been powered on, click VM on the top menu and then click Power On.

3. On the NSSVA console, log in as the root user. The default password is IPStor101 (case sensitive). The FalconStor Virtual Appliance Setup utility launches.

4. Move the cursor to <Configure> and scroll to select the item you want to change.

5. Highlight Host Name and press Enter to configure the host name of the virtual appliance.


6. Highlight Time Zone and press Enter to configure the time zone. Select whether you want to set the system clock to UTC (the default is No). Scroll up and down to search for the correct time zone for your location.

7. Highlight Root Password and press Enter to set a new root password for the virtual appliance. You will need to enter the new password again on the confirmation window.

8. Highlight Network Configuration and press Enter to modify your network configuration. Select eth0 or eth1 to change the IP address setting. Answer No to using DHCP and then set the IP address of the selected virtual network adapter. If you want to set the IP subnet mask, press the down arrow to move the cursor to the netmask setting. The default IP addresses are listed below:
   eth0: 169.254.254.1/255.255.255.0
   eth1: 169.254.254.2/255.255.255.0

9. Repeat the network configuration to set the IP address of the other virtual network adapter.

10. Highlight Default Gateway and press Enter to set a new default gateway for the virtual appliance.

11. Highlight Name Server and press Enter to modify the name server settings. You can add up to four DNS server records to the virtual appliance setting.

12. Highlight NTP Server and press Enter to add up to four NTP server records to the virtual appliance setting.

13. After making all configuration changes, tab over to Finish and press Enter. The utility will list the configuration changes you made.
14. Click Yes to accept and apply the settings on the virtual appliance.

15. Close the utility.


The VMware Tools update script launches and you are prompted to update VMware Tools.

16. Enter the ESX inventory host name of this NSSVA (indicated by the display name of the NSSVA on the ESX server).

17. Enter the ESX/vCenter server IP address.

18. Enter the ESX/vCenter server login user name.

19. Enter the ESX/vCenter server login password.

If the installed VMware Tools version is old, it will be updated; otherwise, it will not be replaced. If an error is encountered during the update, such as an inability to reach the ESX/vCenter server, you will be prompted to Force (press F) the update or Cancel (press C). If you cancel the update, the NSSVA VMware Tools will not be changed and you will need to update VMware Tools via the vSphere client. Alternatively, you can enter "chk_vm.sh" in the NSSVA serial console to re-run the update script.

Once the installation is complete, you can begin configuration of the NSSVA via the FalconStor Management Console. Refer to the Configuration and Management chapter for details. Once configuration is complete, refer to the checklist at the end of this guide.


Configuration and Management


Installing and using the FalconStor Management console
The FalconStor Management Console is the central management tool used to manage and configure the FalconStor NSS Virtual Appliance system. You will use the console for SAN Client and SAN Resource creation, and for replication and high availability configuration.

Note: The Replication and High Availability features are not available in the NSSVA Lite or Trial versions of NSSVA.

The FalconStor Management Console can be installed on any Windows 2000, Windows XP, Windows Server 2003, or Windows Server 2008 system. It is recommended that you install the FalconStor Management Console and the VMware Infrastructure Client on the same computer.

1. Unzip the FalconStor NSS Virtual Appliance package and then run the setup program.
2. Click Next on the console setup to start the installation.
3. Read the License Agreement and click Yes if you agree to the terms.
4. Enter the User Name and Company Name on the Customer Information screen.
5. On the Choose Destination Location screen, change the installation folder or click Next to accept the default destination: "C:\Program Files\FalconStor\IPStor".
6. On the Select Program Folder screen, click Next to accept the default program folder: FalconStor\IPStor.
7. Review the settings on the Start Copying Files screen and click Next to start the program file installation.
8. Click Finish to close the FalconStor Management Console Setup program.

From the FalconStor Management Console, you can manage several FalconStor NSS Virtual Appliances simultaneously. You can configure replication and failover, but you will need to register and connect to both NSSVAs to complete the settings between the NSS Virtual Appliances.


Account Management
There are three types of accounts for the virtual appliance, each with different permission levels. The three accounts have the same default password.

- fsadmin - can perform any VA operation other than managing accounts. This account is also authorized for VA client authentication.
- fsuser - can manage virtual devices assigned to them and can allocate space from the storage pool(s) assigned to them. In addition, they can create new SAN/NAS resources, clients, and groups, as well as assign resources to clients and join resources to groups, as long as they are authorized. VA users are also authorized for VA client authentication. Any time a VA user creates a new SAN/NAS resource, client, or group, access rights are automatically granted to that user for that object.
- root - has full privileges for all system operations. Only root can manage user accounts and system configuration (maintenance).

Connect to the virtual appliance


1. Click Start --> All Programs --> FalconStor --> IPStor, and then click the IPStor Console.
2. Right-click Servers and click Add.
3. Enter the IP address of NSSVA eth0. Use the default administrator account "root" and enter the default administrator password "IPStor101".


The connected NSS Virtual Appliance is listed on the FalconStor Management Console as shown below. The default host name is "FalconStor-NSSVA".

Add License Keycode


You must enter a license keycode to enable server functionality. You can find your license keycode on the server license agreement, or you can use the trial keycode you obtained when you registered on the web site. To enter the keycode:

1. In the console, right-click the NSSVA server and select License.
2. Click Add.
3. Enter the keycode and then click OK.

You can click the License Summary tab to check the details of the license.

Register keycodes
If your computer has Internet access, the console registers a keycode automatically after you enter it; otherwise, the registration will fail. You have a 60-day grace period to use the product without a registered keycode (or a 30-day grace period for a trial). If the machine cannot connect to the Internet, you can perform offline registration. To register a keycode:

1. Highlight an unregistered keycode and click the Register button.

2. Click Next to start the activation.

3. On the Select the method to register this license page, indicate whether you want to perform online registration via the Internet or offline registration.

4. For offline registration, enter a file name to export the license information to local disk and e-mail the file from a computer with Internet access to: activate.keycode@falconstor.com. It is not necessary to write anything in the subject or body of the e-mail. If your e-mail is working correctly, you should receive a reply within a few minutes.

5. When you receive a reply, save the attached signature file to the same local disk.

6. Enter the path to the file saved in step 5 and click Send to import the registration signature file.

7. Afterwards, you will see a message stating that the license was registered successfully.

Add virtual disks for data storage


The FalconStor NSS Virtual Appliance supports up to 4 TB of space for storage virtualization. Before you create the virtual disks for virtualization storage, you should know the block size of the datastore volume, because the maximum size of a single virtual disk is controlled by the volume block size. If you create a virtual disk that exceeds the maximum size supported by its volume, an "Insufficient disk space on datastore" error displays. You can resolve the error by changing the disk size to one supported by the volume block size. For example, a datastore formatted with a 1 MB block size cannot hold a virtual disk larger than 256 GB; a 500 GB virtual disk requires at least a 2 MB block size.

Volume Block Size    Maximum size of one virtual disk
1 MB                 256 GB
2 MB                 512 GB
4 MB                 1024 GB
8 MB                 2048 GB

You can check the block size of your volume via the VMware Infrastructure Client:

1. Launch the VMware Infrastructure Client and connect to the ESX server with an account with root privileges.

2. Click the ESX server in the inventory and then click the Configuration tab.


3. On the Configuration tab, click Storage under the Hardware list. Then right-click one of the datastores and click Properties. Under Volume Properties, you can see the Block Size and the Maximum File Size in the Format information. The screen below displays the VMware Volume Properties with the block size and maximum file size information.

Add the new virtual disk


With the FalconStor NSS Virtual Appliance, you do not need to power off the virtual appliance to add a new virtual disk for storage virtualization usage.

1. In the Virtual Infrastructure Client, right-click the NSS Virtual Appliance (FalconStor-NSSVA) and then click Edit Settings.

2. On the Hardware tab, click the Add button.

3. For Select Device Type, click Hard Disk and then click Next.

4. For Select a Disk, click Create a new virtual disk and then click Next.

5. When prompted to Specify Disk Capacity, Provisioning, and Location, enter the Disk Size of the new virtual disk. Make sure the value does not exceed the maximum file size supported by the volume.


6. Check the Support clustering features such as Fault Tolerance option to force creation of an eagerzeroedthick disk (a command-line alternative is sketched after these steps).

   Notes: Do not set eagerzeroedthick on both the system/data vmdks and the guest VMs' vmdks. Creating an eagerzeroedthick disk is a time-consuming process; you may experience a significant waiting period.

7. If the volume of the NSS Virtual Appliance system does not have enough space to store the new virtual disk, click Specify a datastore and then click the Browse button. Then select a datastore with available free space.

8. Click Next to keep the default values on Specify Advanced Options.

9. Review your choices and click Finish to complete the virtual disk creation setting. In the FalconStor-NSSVA Virtual Machine Properties, you will see New Hard Disk (adding) in the hardware list.

10. Click OK to save the setting; the new virtual disk will be created on the datastore.

11. Repeat the steps above to add another virtual disk for virtualization storage.
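As an alternative to the GUI steps above, an eagerzeroedthick virtual disk can be created from the ESX service console with vmkfstools. This is a hedged sketch in which the size, datastore, folder, and file name are placeholders:

# Create a 100 GB eagerzeroedthick virtual disk (zeroing the full disk can take a long time)
vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/datastore1/FalconStor-NSSVA/data1.vmdk

The resulting .vmdk file can then be attached to the NSSVA via Edit Settings --> Add --> Hard Disk --> Use an existing virtual disk.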

Add the new device to the storage pool


Once you have added the virtual disk for the NSS Virtual Appliance in the virtual machine settings, the NSSVA system must identify those disks and add the new devices into the storage pool. For high availability, refer to the High Availability chapter. To add a new device to the storage pool, follow the steps below:

1. In the FalconStor Management Console, click and expand the FalconStor-NSSVA configuration.

2. Right-click Physical Resources, and then click Rescan.

3. Click Discover New Devices, and then click OK. The New Device Detected (FalconStor-NSSVA) window displays, listing the newly discovered physical devices. Each device's Category displays Unassigned.

4. In the New Device Detected window, select one of the discovered devices and then click the Prepare Disk button.


5. On the Disk Preparation screen, click the Device Category drop-down list, select Reserved for Virtual Device, and then click OK. Then enter YES to confirm the change. When the task has completed, a message stating "Physical device category has been changed successfully" displays.

6. Repeat steps 4 and 5 to change the device category of all newly detected devices to Reserved for Virtual Device.

7. Highlight Physical Resources and click to expand Storage Pools. Then right-click StoragePool-Default and click Properties.

8. On the Storage Pool Properties screen, click the Select All button and then click OK to add all newly detected devices into the storage pool.

9. Click and expand StoragePool-Default to see all of the newly configured devices that have been added into the pool. All devices must be added into the storage pool for central resource and space management.

Create a SAN Client for VMware ESX server


Create a SAN client for each VMware ESX server for storage resource assignment. On the VMware ESX server, you can log in to the console and use the vmkping command to test the IP network connection from the ESX server iSCSI software adapter to the NSSVA. In addition, you can add the NSSVA IP address to the iSCSI server list of the iSCSI software adapter and check whether the iSCSI initiator name is registered on the NSSVA.
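For example, from the ESX service console (the NSSVA address below is a placeholder):

# Test VMkernel network connectivity to the NSSVA iSCSI target address
vmkping 192.168.1.100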


Adding the iSCSI server on ESX Software iSCSI Adapter

1. Launch the VMware Infrastructure Client and connect to the ESX server with an account with root privileges.

2. Once you are connected to the server inventory, highlight the ESX server and click the Configuration tab.

3. On the ESX server Configuration screen, click Storage Adapters and locate the device under iSCSI Software Adapter, for example: vmhba32.

4. Select the iSCSI software adapter device and then click Properties.

5. In the iSCSI Initiator (device name) Properties, check the iSCSI properties and record the iSCSI name, for example: iqn.1998-01.com.vmware:esx03.

6. Click the Dynamic Discovery tab, and then click the Add button.

7. On Send Targets, enter the IP address of the NSS Virtual Appliance.

8. It will take several minutes to complete the configuration.

9. Once the IP address has been added to the iSCSI server list, click Close to complete the setting (a service console alternative is sketched below).
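On ESX hosts with a service console, the same Send Targets address can reportedly be added from the command line; this is a hedged sketch, assuming the software iSCSI adapter is vmhba32 and the NSSVA answers at 192.168.1.100:

# Add the NSSVA as a Send Targets discovery address on the software iSCSI adapter
vmkiscsi-tool -D -a 192.168.1.100 vmhba32

# Rescan the adapter so assigned LUNs are detected
esxcfg-rescan vmhba32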

Creating the SAN Client for the ESX server

1. Launch the FalconStor Management Console and connect to the NSS Virtual Appliance with IPStor administrator privileges.

2. Click and expand the NSSVA, right-click SAN Clients, and then click Add.

3. The Add Client wizard launches.

4. Click Next to start the administration task.

5. When prompted to Select Client Protocols, click to enable the iSCSI protocol and click Next.

6. Select the Target IP by enabling one or both networks providing the iSCSI service.

7. On Set Client's Initiator, the iSCSI initiator name of the ESX server displays if the iSCSI server was added successfully. Click to enable it and then click Next.

8. On Set iSCSI User Access, change the setting to Allow unauthenticated access or enter a CHAP secret (12 to 16 characters).

9. On Set iSCSI Options, keep the default QoS setting.

10. On Enter the Generic Client Name, enter the ESX server's IP address as the client IP address.

11. On Select Persistent Reservation Option, keep the default setting and click Next.

12. On Add the Client, review all configuration settings and then click Finish to add the SAN client to the system.

13. Expand SAN Clients. You will see the newly created SAN client for the ESX server and the iSCSI target. The screen below displays the SAN client and iSCSI target created for the ESX server connection.

Assign the SAN resource to VMware ESX server


The FalconStor NSS Virtual Appliance provides simple and intuitive SAN resource management via the FalconStor management tool. As an administrator, you can easily create a SAN resource and assign it to an ESX server.

1. In the FalconStor Management Console, click to expand the NSSVA.

2. Navigate to Logical Resources --> SAN Resources and click New.

3. The Create SAN Resource wizard launches.

4. Click Next to start the administration task.

5. Select the SAN Resource Type by selecting Virtual Device, and then click Next.

6. Select the Physical Resource for the Virtual Device(s) by selecting StoragePool-Default under Storage Pools, and then click Next.


7. Select Express as the Creation Method and enter the allocated size of the new SAN resource you are creating.
8. When prompted to Enter the SAN Resource Name, you can keep the default name created by the system or change the name manually.
9. Confirm the allocated size on the Create the SAN Resource screen and then click Finish to create the SAN resource.
10. Once the SAN resource has been created, the Create SAN Resource Wizard prompts you to assign a SAN client to it. If you have already created the SAN client for the ESX server, click Yes. The Assign a SAN Resource Wizard launches automatically.
11. Click Next to start the administration task.
12. Select the iSCSI target to be assigned to the SAN resource.
13. When prompted to Select LUN Numbers for the resources, click Next to keep the default setting.
14. Click Finish to Assign iSCSI Target(s) to the SAN Resource.
If you answered No during the Assign SAN Client process, you can perform this task later by right-clicking the specific SAN resource name under the SAN Resources tree, and then clicking Assign. The screen below displays the SAN client and iSCSI target created for the ESX server connection.


Assign the same SAN resource to two VMware ESX servers


The SAN resource plays the role of shared storage that is assigned to two ESX servers to create VMware VMotion, VMware DRS, and VMware HA solutions. This kind of SAN resource must have read/write permission on both servers and must allow simultaneous access.
1. On the FalconStor Management Console, click and expand the NSSVA and the SAN Clients; then click and expand the ESX server and iSCSI.
2. Under iSCSI, right-click the iSCSI target created for the ESX server and then click Properties.
3. On the iSCSI Target Properties screen, click the Access Mode drop-down list and change it to Read/Write Non-Exclusive to open access to the other ESX servers that are assigned to the same resource.
4. Repeat the same Access Mode change on all iSCSI targets of ESX servers that will share the same resource, as shown in the screen below.

Note: For advanced configuration of high availability, refer to the documentation link that was included in your registration e-mail.


Enable VMDirectPath I/O in vSphere v4


Enabling VMDirectPath I/O in vSphere v4 for FalconStor Virtual Appliances (NSSVA) requires the steps described below:
Part I - Enable the VMDirectPath Option
Part II - Configure a virtual machine to use a passthrough VMDirectPath PCI device/port
Part III - Modify a FalconStor Virtual Appliance (for ESX 3.5) to load VMware Drivers
Part IV - Modify a FalconStor Virtual Appliance (made for ESX 3.5) to properly load the NIC/HBA driver - for VAs that are pre-RHEL5.3 - Page XX

Enable the VMDirectPath Option


1. From the Inventory section, go to the Configuration tab.

2. Under Hardware, click Advanced Settings and enable the VMDirectPath Configuration option.

3. Reboot.
4. Return to the Configuration tab and navigate back to the Hardware Advanced Settings.


5. Click the Edit link to add the PCI device ports to the Passthrough List.

6. Reboot again.
7. Return to the Advanced Settings section under the Configuration tab to confirm the passthrough ports have been enabled.
Once complete, you can follow the steps in the next few sections to add one or several ports to a given virtual machine. The following restrictions apply:

If you are using a dual-port NIC/HBA, the ENTIRE NIC is set to passthrough mode. This means both ports will disappear from the VMkernel.
If you are using a dual-port NIC/HBA, the ENTIRE NIC is given to one specific virtual machine. Therefore, whether you assign one port or two ports to the VM, both ports are "reserved" and neither can be given to another virtual machine. The passthrough is at the PCI port level, so it is an all-or-nothing rule.
Once a virtual machine (VM) has a passthrough port assigned to it (following the procedures below), the VM can no longer be VMotioned (nor DRSed, HAed, or FTed) to another ESX host. It becomes a permanent resident of the current ESX host.
Once a VM has a passthrough port assigned to it, it can no longer take advantage of memory over-allocation (also known as overcommitment); instead, the entire allocated virtual RAM is automatically RESERVED. Thus, enough RAM must be available on the host for the VM to power on.
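Before enabling passthrough, it can help to note the PCI address of the NIC/HBA you intend to dedicate. From the ESX service console, a rough sketch (the output format varies by release, and the grep pattern assumes an Ethernet device):

# lspci | grep -i ethernet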


Configure a virtual machine to use a passthrough VMDirectPath PCI device/port


The steps necessary to configure a virtual machine (VM) to use a passthrough VMDirectPath PCI device/port are described below:
1. Navigate to File --> Deploy OVF Template.

2. Select the appropriate ovf file.
3. Right-click and select Upgrade Virtual Hardware.
4. Select Edit Settings.

5. Click the Hardware tab and select the appropriate network adapter.


6. Click the Add button to add hardware.

The Add Hardware screen displays.
7. Select the type of device you wish to add.

In this example, PCI device was chosen.



8. Specify the device to which you want to connect.

9. Click Finish to complete the operation.


Modify a FalconStor Virtual Appliance (for ESX 3.5) to load VMware Drivers
The procedures below illustrate how to modify a FalconStor Virtual Appliance (made for ESX 3.5) to properly load the updated VMware drivers from VMware Tools for the updated Virtual Machine Hardware v7 (under vSphere v4).
1. Power on the NSS-VA virtual machine. During the boot-up process, you may see several FAILED error messages, which you can disregard for now.
2. Log in to the system from the console with the user name root and password IPStor101.
3. Perform a VMware Tools upgrade.
4. Click Abort at the installation screen, then press Ctrl+C on the following screen to exit back to the prompt.
5. Connect to the host device. The Install/Upgrade Tools screen displays.
6. Select Interactive Tools Upgrade, then click OK. When you first try to install/upgrade the VMware Tools, you will get an error and be prompted to remove the existing soft links. Once the symbolic links are removed, re-run the installation script (vmware-install.pl) and press [ENTER] through the next few screens (see the sketch following these steps).
7. Reboot the machine ("sync;sync;reboot" from the command prompt), and then configure the virtual appliance per the standard installation guide. The "vaconfig" script runs automatically, and you can then configure your network settings, hostname, NTP, DNS, etc. The virtual appliance will reboot automatically.
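As a rough sketch of the interactive upgrade in step 6, the console-side commands follow the standard VMware Tools procedure (the tarball name and extraction directory are placeholders that will differ by VMware Tools version):

# mount /dev/cdrom /media
# cd /tmp
# tar xvfz /media/VMwareTools-*.tar.gz
# cd vmware-tools-distrib
# ./vmware-install.pl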

Modify a FalconStor Virtual Appliance (for ESX 3.5) to load the NIC/HBA driver
This step is necessary for virtual appliances that are pre-RHEL5.3. If you are using Red Hat Enterprise Linux 5.3 (RHEL5.3), the Intel drivers for the 10GbE NIC (or QLogic 8Gbps FC) are already installed. If not, you will need to download, compile, and install the Intel drivers from Intel's web site.
1. Copy the file (i.e., ixgbe-2.0.38.2.tar.gz) to /root. The easiest way (since your network is down at this point) is to create an ISO file and mount the ISO to the CD-ROM drive of the virtual machine, using the same commands as earlier to mount the CD (mount /dev/cdrom /media).
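For example, on a Linux machine with mkisofs available, you could package the driver into an ISO roughly as follows (the ISO file name is a placeholder):

# mkisofs -o ixgbe-driver.iso ixgbe-2.0.38.2.tar.gz

Then attach the ISO to the virtual machine's CD-ROM drive and, from the appliance console:

# mount /dev/cdrom /media
# cp /media/ixgbe-2.0.38.2.tar.gz /root/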


2. Run:
# tar xvfz ixgbe-2.0.38.2.tar.gz
# cd ixgbe-2.0.38.2
# vi README

3. Follow the instructions to compile and load the driver.


# cd /root/ixgbe-2.0.38.2/src/
# make install
# insmod /lib/modules/2.6.18-53.1.19.el5/kernel/drivers/net/ixgbe/ixgbe.ko

(or simply modprobe ixgbe).
4. Configure your network cards using the "vaconfig" command (if they are eth0 and eth1). If you are creating new files (for eth2 and eth3, in case you did NOT remove the original eth0 and eth1 virtual NICs from the VMware VM's settings), use the following commands:
# cd /etc/sysconfig/network-scripts/
# vi ifcfg-eth2
DEVICE=eth2
BOOTPROTO=none
IPADDR=192.168.88.112
NETMASK=255.255.255.0
ONBOOT=yes
TYPE=Ethernet
MTU=9000
DHCP_HOSTNAME=

Note: Make sure to set the MTU to 9000 if you want to use Jumbo Frames.
5. Repeat if you need to configure eth3. Make sure to modify all parameters in the file above to match the proper settings (IPADDR, DEVICE, etc.).
6. Update the "/etc/modprobe.conf" file to make sure the ixgbe driver is loaded during startup:
# vi /etc/modprobe.conf
alias eth0 ixgbe
alias eth1 ixgbe
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptspi
alias scsi_hostadapter2 ata_piix
install pciehp /sbin/modprobe -q --ignore-install acpiphp; /bin/true
install pcnet32 /sbin/modprobe -q --ignore-install vmxnet;/sbin/modprobe -q --ignore-install pcnet32 $CMDLINE_OPTS;/bin/true
alias char-major-14 sb
options sb io=0x220 irq=5 dma=1 dma16=5 mpu_io=0x330


7. Create a new boot image, and reboot to confirm the changes:


# cd /boot/
# mkinitrd -f initrd-2.6.18-53.1.19.el5-8Jun09-230047.img 2.6.18-53.1.19.el5

Note: Replace the name of the .img file in the command above with the .img filename indicated in your menu.lst file, in the VERY LAST LINE. For example:
# cat /boot/grub/menu.lst
# grub.conf generated by anaconda
You do not have to rerun grub after making changes to this file.
8. After reboot, run "ifconfig" and confirm your changes.
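Tip (for the note above): rather than reading the whole menu.lst file, you can pull out just the initrd lines:

# grep initrd /boot/grub/menu.lst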


High Availability
FalconStor NSS Virtual Appliance High Availability (HA) solution
FalconStor NSS Virtual Appliance supports High Availability storage service via two NSS Virtual Appliances in a Cross-Mirror and iSCSI service failover design. The High Availability (HA) option is not available for the Single Node Edition, the Lite version or the trial version of NSSVA. For best results using the high availability architecture, make sure all of the configurations follow the best practice instructions and guidelines in this chapter.


Configuring the NSS Virtual Appliance Cross-Mirror failover


Refer to the checklist tables in Appendix A to verify your NSS Virtual Appliance environment and configuration before setting up Cross-Mirror high availability.
1. Launch the FalconStor Management Console, adding and connecting to both the primary and secondary NSS Virtual Appliances.
2. Expand the SCSI Devices of both NSS Virtual Appliances and make sure you have the same number of devices with the same size and SCSI ID on both NSS Virtual Appliances.

3. Right-click the primary NSSVA, point to the failover appliance, and launch the Failover Setup Wizard. The Failover Setup Wizard checks that the iSCSI option is enabled on the primary NSSVA. Make sure the iSCSI option is also enabled on the secondary NSSVA; iSCSI is the default service running on the NSSVA.
4. Click Next on the welcome page of the Failover Setup Wizard to start the cross-mirror configuration.
5. Click Yes to re-scan the physical devices to guarantee the device number and size on both servers are equal. You will see power control enabling suggestions after the wizard completes. You can ignore this message.
6. At the Configure Cross Mirror Option screen, click Next to start the disk preparation and mirror relationship creation.


To save the system configuration for failover purposes, a configuration repository is required for the failover primary server.
7. Click OK to close the information message.
8. When prompted to Select the Secondary Server and the Cross Mirror Remote Server IP address, enter the IP Address on the Primary Server using the eth1 IP address of the primary NSSVA. Then enter the IP Address on the Secondary Server using the eth1 IP address of the secondary NSSVA. Alternatively, you can enter the primary server IP address and then click the Find button to have the wizard retrieve the IP address in the same IP subnet from the secondary NSSVA. The wizard completes the task of checking the secondary server settings.
9. When prompted to Configure Remote Storage, make sure all devices have been checked and enabled.
10. Click OK to close the dialog screen. The Enable Configuration Repository Wizard launches.
11. Click Next to start the configuration task.
12. When prompted to Select the Physical Resources for the Virtual Device(s), select a physical device with at least 10 GB of available space to save the configuration repository. If all physical devices are 10 GB or larger, you can click Next to continue the configuration.
13. When prompted to Select the Physical Device, select a physical device that is at least 10 GB and click Next.
14. Click Finish to confirm the selected physical device on the Create the Configuration Repository screen and complete the creation of the configuration repository.
15. The IPStor User List displays, prompting you for the user name and password. Make sure they match on both the primary and secondary NSSVA and click OK. The Select the Failover Subnets dialog displays as the wizard retrieves the IP addresses of both the primary and secondary NSSVA and the IP subnet (except the interface used by Cross-Mirror).
16. Confirm all information is correct and click Next.


17. Enter the IP address of the server <the primary NSSVA host name> using the client access IP address. The ESX server iSCSI Software Adapter uses this IP address to log into the iSCSI target and connect to the SAN resource. This IP address will fail over to the secondary NSSVA if the primary NSSVA encounters a problem.
Note: It is recommended that you use the original eth0 IP address here so you will not need to re-configure the FalconStor Management Console connection.
18. Enter the Health monitoring IP address for the server <the primary NSSVA host name> using a new eth0 IP address of the primary NSSVA.
Note: It is recommended that you create a new eth0 IP address here so you will not need to re-configure the FalconStor Management Console connection.
19. Confirm the Failover Configuration by reviewing the settings and clicking Finish to complete the failover configuration creation. The wizard will recommend that you make sure the clocks are in sync between the failover servers.
20. Click OK to close the wizard.
Notes:

- Once the configuration of cross-mirror failover is complete via the Failover Setup Wizard, the Power Control option in the FalconStor Management Console must not be changed.
- If you do not use the original eth0 IP address as the client access IP, you must delete the primary NSSVA record from the FalconStor Management Console and re-add the primary NSSVA using the new client access IP address.

You are now ready to set up the power control patch to complete the failover settings. Refer to Power Control for VMware ESX server.


Power Control for VMware ESX server


You must configure the power control options for your failover servers by using the NSSVA Power Control Utility. You can configure the NSSVA Power Control Utility to power off the primary NSSVA if all communication between the two NSSVA servers fails. The power control options include the following:
- Primary ESX server connection
- Secondary ESX server connection
- Primary ESX server root password
- Force takeover setting
- Primary NSSVA network test
- Power control test

The power control options for the VMware ESX server are used to avoid an unplanned takeover caused by an ESX server physical network problem. The NSSVA Power Control Utility does the following:
- The NSSVA uses the cross-mirror (via iSCSI connection) so that it does not use the same storage.
- Sets the connection to the primary ESX server so that the secondary NSSVA can send the power-off command to the primary ESX server if necessary; for example, if the primary NSSVA hangs and cannot answer any failover commands. If the secondary NSSVA cannot send the power-off command to the primary ESX server, it will not take over in a default configuration setting.
- Sets the IP address of the secondary ESX server so that it can ping the IP addresses from the primary NSSVA to check the network connection. If the force takeover option is enabled, the primary NSSVA checks the network connection periodically. Once the network disconnects and force takeover is enabled, the primary NSSVA shuts itself down after 30 seconds.
- The Takeover option is disabled by default. You will need to enable this option using the NSSVA power control (vapwc-config) utility to force the secondary NSSVA to take over. Enable this option if you want the secondary NSSVA to always take over when there is no communication with the primary ESX server.


Launching the power control utility


To launch the power control utility:
1. Launch the VMware Infrastructure Client and connect to the ESX server with an account that has root privileges.
2. Right-click the installed FalconStor-NSSVA and select Open Console. If the NSSVA has not been powered on, click VM on the top menu and click Power On.
3. On the NSSVA console, log in as the root user. The default password is IPStor101 (case sensitive).
4. Launch the power control utility by typing vapwc-config. The failover configuration is detected.
5. Select Yes if the detected failover configuration on your NSSVA system is correct. (If you select No, the configuration steps will be skipped.) The FalconStor NSS Power Control Configuration main menu displays, allowing you to run the following options:
- Primary ESX server connection
- Secondary ESX server connection
- Primary ESX server root password
- Force takeover setting
- Primary NSSVA network test
- Power control test
6. Select Primary ESX server connection and enter or verify the primary ESX IP address. For optimum reliability, you must enter at least two ESX service console IP addresses for the primary server. The power control utility pings the IP address to verify the configuration.
7. Select Secondary ESX server connection and enter or verify the secondary ESX IP address. At least two ESX service console IP addresses must be entered for the secondary ESX server. The IP addresses are used to test network availability on the primary NSSVA appliance. The power control utility pings the IP address to verify the configuration.
8. Select Primary ESX server root password and enter or verify the primary root password. The root password field cannot be left blank.
9. Select Force takeover setting to enable or disable Force takeover.


This setting is disabled by default. If you choose Yes, this option enables the network monitor function on the primary NSSVA. The primary NSSVA will shut itself down if a physical connection failure is detected. Use this option with caution, as data inconsistency may occur between the primary and the secondary NSSVA in a force takeover situation.
10. Select Primary NSSVA network test to test the network connection of the ESX server. The primary NSSVA network test connects to the primary NSSVA and pings the reference IP addresses on the primary NSSVA, the secondary NSSVA IP address, the secondary NSSVA cross-mirror IP address, the primary NSSVA default gateway IP address, and the secondary ESX server IP address.
11. Select Power control test to test sending the power control command. Power control from the secondary NSSVA to the primary ESX server is verified. Once all communication tests to the primary ESX server are successful, you can click OK to continue the configuration. Failover setup is now complete.


Check Failover status


You can see the current status of your failover configuration, including all settings, by checking the Failover Information tab for the server.

Failover settings, including which IP addresses are being monitored for failover.

Current status of failover configuration.

In addition, you will see a colored dot next to a server to indicate the following conditions:
- Red dot - The server is currently in failover mode and has been taken over by the secondary server.
- Green dot - The server has taken over the primary server's resources.
- Yellow dot - The user has suspended failover on this server. The current server will NOT take over the primary server's resources even if it detects an abnormal condition on the primary server.

Failover events are also written to the primary server's Event Log, so you can check there for status and operational information, as well as any errors. You should be aware that when a failover occurs, the console will show the failover partner's Event Log for the server that failed.


After failover
When a failed server is restarted, it communicates with the acting primary server and must receive the okay from the acting primary server in order to recover its role as the primary server. If there is a communication problem, such as a network error, and no notification is received, the failed server remains in a 'ready' state but does not recover its role as the primary server. After the communication problem has been resolved, the storage server will be able to recover normally. If failover is suspended on the secondary server, or if the failover module is stopped, the primary will not automatically recover until the ipstorsm.sh recovery command is entered. If both failover servers go offline and then only one is brought up, type the ipstorsm.sh recovery command to bring the storage server back online.
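For reference, the recovery commands mentioned above are run from the storage server's Linux console; a minimal sketch using the command names given in this guide:

# sms -v
# ipstorsm.sh recovery

Run the recovery command only after sms reports the server in a ready state.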

Manual recovery
Manual recovery is the process by which the secondary server releases the identity of the primary to allow the primary to restore its operation. Manual recovery can be triggered by selecting the Stop Takeover option from the FalconStor Management Console. If the primary server is not ready to recover, and you can still communicate with the server, a detailed failover screen displays. If the primary server is not ready to recover, and you cannot communicate with the server, a warning message displays.

Auto recovery
You can enable auto recovery by changing the Auto Recovery option; with auto recovery, control is returned to the primary server automatically once the primary server has recovered from a failover. Once control has returned to the primary server, the secondary server returns to its normal monitoring mode.

Fix a failed server


If the primary server fails over to the secondary and hardware changes are made to the failed server, the secondary server will not be aware of these changes. When failback occurs, the original configuration parameters will be returned to the primary server. To ensure that both servers become synchronized with the new hardware information, you will need to issue a physical device rescan for the machine whose hardware has changed as soon as the failback occurs.


Recover from a cross-mirror failure


Whether your cross-mirror disk was brought down for maintenance or because of a failure, you must follow the procedure listed below to properly bring up the cross-mirror appliance. When powering down both servers in a cross-mirror configuration for maintenance, the servers must be properly brought up as follows in order to successfully recover from failover. If the cross-mirror environment is in a healthy state, all resources are in sync, and all storage is local to the server (none has swapped), the procedure is as follows:
1. Stop NSS on the secondary and wait for the primary to take over.
2. Power down the server.
3. After the primary has successfully taken over, stop NSS on the primary server and power it down as well.
Note: This is considered a graceful way of powering down both servers for maintenance. After maintenance is complete, this is the proper way to bring up the servers and return them to a healthy and up state.
4. Power up the primary server.
5. Power up the secondary server. NSS will automatically start.
6. Verify in /proc/scsi/scsi that both servers can see their remote storage (usually identified by having 50 as the adapter number; for example, the first LUN would be 50:0:0:0). If this is not the case, restart the iSCSI initiator or re-login to the servers' respective targets to see the remote storage.
Restarting the iSCSI initiator: "/etc/init.d/iscsi restart"

Logging into a target:
iscsiadm -m node -p <ipaddress>:3261,0 -T <remote-target-name> -l
Example: "iscsiadm -m node -p 192.168.200.201:3261,0 -T iqn.2000-03.com.falconstor:istor.PMCC2401 -l"
7. Once you have verified that both servers can see the remote storage, restart NSS on both servers. Failure to do so will cause problems recovering the server.
8. After NSS has been restarted, verify using the "sms -v" command that both servers are in a ready state.
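To spot the remote devices mentioned in step 6 quickly, a sketch based on the adapter-numbering convention described there (adapter 50):

# grep scsi50 /proc/scsi/scsi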


Both servers should now be recovered and in a healthy state.

Re-synchronize Cross mirror on a virtual appliance


After recovering from a cross-mirror failure, the virtual disks are automatically re-synchronized according to the server properties you have set up. You can click the Performance tab to configure the synchronization options. The virtual disks will need to be manually re-synchronized if the disk is offline for more than 20 minutes. Right-click the server and select Cross Mirror --> Synchronize to manually re-synchronize the disks.
You can remove cross-mirror failover to enable both virtual servers to act as standalone storage servers. To remove the cross-mirror failover:
1. Remove cross mirror failover from the console.
2. Restart both virtual servers.
3. Re-login to the servers and manually remove all mirrors from the virtual devices left behind after cross-mirror removal. This can also be done in batch mode by right-clicking SAN Resources --> Mirror --> Remove.

Check resources and swap if possible


Swapping takes place when data functions are moved from a failed disk on the primary server to the mirrored disk on the secondary server. Afterwards, the system automatically checks every hour to see if the disks can be swapped back. If the disk has been replaced/repaired and the cross mirror has been synchronized, you can force a swap to occur now. To do this, select Cross Mirror --> Check & Swap. The system will check that the local mirror disk is usable and that the cross mirror is synchronized. If they are, the system will swap disks. You can check the Layout tab for the SAN resource afterwards to see the status.

Verify and repair a cross mirror configuration


Use the Verify & Repair option for the following situations:
- A physical disk used by the cross mirror has been replaced
- A mirror resource was offline when auto expansion occurred
- To create a mirror for virtual resources that existed on the primary server prior to configuration
- To view storage exception information that cannot be repaired and requires further assistance


When replacing local or remote storage, if a mirror needs to be swapped first, a swapping request is sent to the server to trigger the swap. Storage can only be replaced when the damaged segments are part of the mirror, either local or remote. New storage must be available for this option.
Note: If you have replaced disks, you should perform a rescan on both servers before using the Verify & Repair option.
To use the Verify & Repair option:
1. Log into both cross mirror servers.
2. Right-click the primary server and select Cross Mirror --> Verify & Repair.
3. Click the button for any issue that needs to be corrected. You will only be able to select a button if that is the scenario where the problem occurred. The other buttons will not be selectable.
Resources
If everything is working correctly, this option will be labeled Resources and will not be selectable. The option will be labeled Incomplete Resources for the following scenarios:
- The mirror resource was offline when auto expansion (i.e., of a Snapshot resource) occurred but the device is now back online.
- You need to create a mirror for virtual resources that existed on the primary server prior to cross mirror configuration.

1. Right-click on the server and select Cross Mirror --> Verify & Repair.


2. Click the Incomplete Resources button.

3. Select the resource to be repaired.
4. When prompted, confirm that you want to repair this resource.
Remote Storage
If everything is working correctly, this option will be labeled Remote Storage and will not be selectable. The option will be labeled Damaged or Missing Remote Storage when a physical disk being used by cross mirroring on the secondary server has been replaced.
Note: You must suspend failover before replacing the storage.
1. Right-click the primary server and select Cross Mirror --> Verify & Repair.


2. Click the Damaged or Missing Remote Storage button.

3. Select the remote device to be repaired.


Local Storage

If everything is working correctly, this option will be labeled Local Storage and will not be selectable. The option will be labeled Damaged or Missing Local Storage when a physical disk being used by cross mirroring is damaged on the primary server and has been replaced.
Note: You must suspend failover before replacing the storage.
1. Right-click the primary server and select Cross Mirror --> Verify & Repair.

2. Click the Damaged or Missing Local Storage button.

3. Select the local device to be replaced.


4. Confirm that this is the device to replace.
Storage and Complete Resources
If everything is working correctly, this option will be labeled Storage and Complete Resources and will not be selectable. The option will be labeled Resources with Missing Segments on both Local and Remote Storage when a virtual device spans multiple physical devices and one physical device is offline on both the primary and secondary server. This situation is very rare and this option is informational only.
1. Right-click the server and select Cross Mirror --> Verify & Repair.

2. Click the Resources with Missing segments on both Local and Remote Storage button.


You will see a list of failed devices. Because this option is informational only, no action can be taken here.

Modify failover configuration


Make changes to the servers in your failover configuration
The first time you set up your failover configuration, the secondary server cannot have any Replica resources. In order to make any changes to a mutual failover configuration, you must be running the console with write access to both servers. NSS will automatically log on to the failover pair when you attempt any configuration on the failover set. While it is not required that both servers have the same username and password, the system will try to connect to both servers using the same username and password. If the servers have different usernames/passwords, it will prompt you to enter them before you can continue.
Change physical device
If you make a change to a physical device (such as adding a network card that will be used for failover), you will need to re-run the Failover wizard. Be sure to scan both servers during the wizard. At that point, the secondary server is permitted to have Replica resources. This makes it easy for you to upgrade your failover configuration.
Change subnet
If you switch IP segments for an existing failover configuration, do the following:
1. Remove failover from both storage servers.
2. Delete the current failover servers from the FalconStor Management Console.
3. Make network modifications to the storage servers (i.e., change IP segments).
4. Add the storage servers back to the FalconStor Management Console.
5. Configure failover using the new IP segment.

Start/stop failover or recovery


Force a takeover by a secondary server
On the secondary server, select Failover --> Start Takeover <servername> to initiate a failover to the secondary server. You may want to do this if you are taking your primary server offline, such as when you will be performing maintenance on it. Once failover is complete, a failover message will blink in red at the bottom of the console and you will be disconnected from the primary server.


Manually initiate a recovery to your primary server


Select Failover --> Stop Takeover if your failover configuration was not set up to use FalconStor's Auto Recovery feature and you want to force control to return to your primary server, or if you manually forced a takeover and now want to recover to your primary server. Once failback is complete, you will be logged off from the virtual primary server.

Suspend/resume failover
Select Failover --> Suspend Failover to stop monitoring the partner server. In the case of active-passive failover, you can suspend from the secondary server. However, the server that you suspend from will stop monitoring its partner and will not take over for that partner server in the event of failure. It can still fail over itself. Select Failover --> Resume Failover to restart the monitoring.
Notes: If the cross mirror link goes down, failover will be suspended. Use the Resume Failover option when the cross mirror link comes back up. The disks will automatically be re-synced at the scheduled interval, or you can manually synchronize using the cross mirror synchronize option. If you stop the NSS processes on the primary server after suspending failover, you must do the following once you restart your storage server:
1. At a Linux command prompt, type sms to see the failover status.
2. When the system is in a ready state, type the following:
ipstorsm.sh recovery
Once the connection is repaired, the failover status is not cleared until failover is resumed on both servers.

Remove a failover configuration


Right-click one of your failover servers and select Failover --> Remove Failover Server to remove the selected server from the failover configuration. In a one-way failover configuration, this eliminates the configuration and returns the servers to independent storage servers. If you are using cross-mirror failover, after removing the cross-mirror relationship you will notice that the cross mirror is gone but the configuration of your iSCSI initiator remains and the disks are still presented to both the primary and secondary servers.


Replication
Overview
Replication is the process by which a SAN Resource maintains a copy of itself either locally or at a remote site. The data is copied, distributed, and then synchronized to ensure consistency between the redundant resources. The SAN Resource being replicated is known as the primary disk. The changed data is transmitted from the primary disk to the replica disk so that the two stay synchronized.
Under normal operation, clients do not have access to the replica disk. If a disaster occurs and the replica is needed, the administrator can promote the replica to become a SAN Resource so that clients can access it. Replica disks can be configured for NSS storage services, including backup, mirroring, or TimeMark/CDP, which can be useful for viewing the contents of the disk or recovering files.
Replication can be set to occur continuously or at set intervals (based on a schedule or watermark). For performance purposes and added protection, data can be compressed or encrypted during replication.
Note: Replication is not available in the NSSVA Lite or Trial version.

Replication configuration
Requirements
The following are the requirements for setting up a replication configuration:
- (Remote replication) You must have two storage servers.
- (Remote replication) You must have write access to both servers.
- You must have enough space on the target server for the replica and for the Snapshot Resource.
- Both clocks should be synchronized so that the timestamps match.
- In order to replicate to a disk with thin provisioning, the size of the SAN resource must be equal to or greater than 10 GB (the minimum permissible size of a thin disk).
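For the clock-synchronization requirement, one common approach on a Linux-based appliance is a one-time sync against an NTP server from the console; a sketch (the NTP server name below is a placeholder, and the vaconfig utility can also configure NTP, as noted earlier in this guide):

# ntpdate pool.ntp.org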


Setup
You can enable replication for a single SAN Resource or you can use the batch feature to enable replication for multiple SAN Resources. You need Snapshot Resources for the primary and replica disks. If you do not have them, you can create them through the wizard.
1. For a single SAN Resource, right-click the resource and select Replication --> Enable. For multiple SAN Resources, right-click the SAN Resources object and select Replication --> Enable. Each primary disk can have only one replica disk. If you do not have a Snapshot Resource, the wizard will take you through the process of creating one.
2. Select the server that will contain the replica.

For local replication, select the Local Server. For remote replication, select any server other than the Local Server. If the server you want does not appear on the list, click the Add button.


3. (Remote replication only) Confirm/enter the target server's IP address.

4. Specify if you want to use Continuous Replication.

Continuous Mode - Select if you want to use FalconStor's Continuous Replication. After the replication wizard completes, you will be prompted to create a Continuous Replication Resource for the primary disk.
Delta Mode - Select if you want replication to occur at set intervals (based on a schedule or watermark).


Use existing TimeMark - Determines whether the most current TimeMark on the primary server is used when replication begins or whether the replication process creates a TimeMark specifically for the replication. Using an existing TimeMark reduces the usage of your Snapshot Resource. However, the data being replicated may not be the most current. For example, your replication is scheduled to start at 11:15 and your most recent TimeMark was created at 11:00. If you have selected Use Existing TimeMark, the replication will occur with the 11:00 data, even though additional changes may have occurred between 11:00 and 11:15. Therefore, if you select Use Existing TimeMark, you must coordinate your TimeMark schedule with your replication schedule.
Even if you select Use Existing TimeMark, a new TimeMark will be created under the following conditions:
- The first time replication occurs.
- Each existing TimeMark will only be used once. If replication occurs multiple times between the creation of TimeMarks, the TimeMark will be used once; a new TimeMark will be created for subsequent replications until the next TimeMark is created.
- The most recent TimeMark has been deleted, but older TimeMarks exist.
- After a manual rescan.
Preserve Replication TimeMark - If you did not select the Use Existing TimeMark option, a temporary TimeMark is created when replication begins. This TimeMark is then deleted after the replication has completed. Select Preserve Replication TimeMark to create a permanent TimeMark that will not be deleted when replication has completed (if the TimeMark option is enabled). This is a convenient way to keep all of the replication TimeMarks without setting up a separate TimeMark schedule.


5. Configure how often, and under what circumstances, replication should occur.

An initial replication for individual resources begins immediately upon setting the replication policy. Then replication occurs according to the specified policy. You must select at least one policy, but you can have multiple policies. You must specify a policy even if you are using continuous replication. This way, if the system switches to delta replication, it can automatically switch back to continuous replication after the next regularly-scheduled replication takes place.
Any number of continuous replication jobs can run concurrently. However, by default, 20 delta replication jobs can run per server at any given time. If an additional job is ready to run, pending jobs will wait until one of the current replication jobs finishes.
Note: Contact Technical Support for information about changing this value, but note that additional replication jobs will increase the load and bandwidth usage of your servers and network and may be limited by individual hardware specifications.
Start replication when the amount of new data reaches - If you enter a watermark value, when the value is reached, a snapshot will be taken and replication of that data will begin. If additional data (more than the watermark value) is written to the disk after the snapshot, that data will not be replicated until the next replication. If a replication that was triggered by a watermark fails, the replication will be re-started based on the retry value you enter, assuming the system detects any write activity to the primary disk at that time. Future watermark-triggered replications will not start until after a successful replication occurs. If you are using continuous replication and have set a watermark value, make sure that it is a value that can actually be reached; otherwise snapshots will rarely be taken. Continuous replication does not take snapshots, but you will need a recent, valid snapshot if you ever need to roll back the replica to an earlier TimeMark during promotion. If you are using SafeCache, replication is triggered when the watermark value of data is moved from the cache resource to the disk.
Start an initial replication on mm/dd/yyyy at hh:mm and then every n hours/minutes thereafter - Indicate when replication should begin and how often it should be repeated. If a replication is already occurring when the next time interval is reached, the new replication request will be ignored.
Note: If you are using the FalconStor Snapshot Agent for Microsoft Exchange 5.5, the time between each replication should be longer than the time it takes to stop and then re-start the database.
6. Specify if you want to use the Throughput Control option.

Click Enable Throughput Control to control the synchronization process and maintain optimal resource throughput. This option can be used for questionable networks. The replication will be monitored every four minutes. If the replication takes longer than four minutes, the system will slow down replication to 10KB to avoid replication failure.


The Set Throughput Control policy screen displays.

This screen allows you to specify the interval at which the I/O activity is to be checked, as well as the resume synchronization schedule. The default is to check throughput activity every minute and only resume when the I/O activity per second is less than or equal to 20 MB. The maximum number of checking attempts before resuming synchronization defaults to three (3). You can change the number of attempts or enter zero (0) to make the number of attempts unlimited.
7. Click Next once you have set the throughput policy.
8. Select whether you want to use TCP or RUDP as the protocol for this replication.
Note: All new installations of NSS default to TCP.


9. Indicate which options you want to use for this device.

The Compression option provides enhanced throughput during replication by compressing the data stream. Compression leverages machines with multiple processors by using more than one thread for data compression/decompression during replication. By default, two (2) threads are used. The number can be increased to eight (8). Compression reduces the size of the transmission, thereby maximizing network bandwidth.
Note: Compression requires 64K of contiguous memory. If the memory in the storage server is very fragmented, it will fail to allocate 64K. When this happens, replication will fail.
The Encryption option provides an additional layer of security during replication by securing data transmission over the network. Initial key distribution is accomplished using the authenticated Diffie-Hellman exchange protocol. Subsequent session keys are derived from the master shared secret, making it very secure.
Enable Microscan - Microscan analyzes each replication block on the fly during replication and transmits only the changed sections of the block. This is beneficial if the network transport speed is slow and the client makes small random updates to the disk. If the global Microscan option is turned on, it overrides the Microscan setting for an individual virtual device. Also, if the virtual devices are in a group configured for replication, the group policy always overrides the individual device's policy.


10. Select how you want to create the replica disk.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.
Express automatically creates the replica for you from available hard disk segments. You only have to select the storage pool or physical device that should be used to create the replica resource.
Select Existing lets you select an existing resource. There are several restrictions on what you can select:
- The target must be the same type as the primary.
- The target must be the same size as the primary.
- The target can have clients assigned to it, but they cannot be connected during the replication configuration.
Note: All data on the target will be overwritten.


If you select Custom, you will see the following windows:


Indicate the type of replica disk you are creating.

Select the storage pool or device to use to create the replica resource.

Only one disk can be selected at a time from this dialog. To create a replica disk from multiple physical disks, you will need to add the disks one at a time. After selecting the first disk, you will have the option to add more disks. You will need to do this if the first disk does not have enough space for the replica.

Indicate how much space to allocate from this disk.

Click Add More if you need to add another physical disk to this replica disk. You will go back to the physical device selection screen where you can select another disk.


11. Enter a name for the replica disk.

The name is not case sensitive.
12. Confirm that all information is correct and then click Finish to create the replication configuration.
Notes:
- Once you create your replication configuration, you should not change the hostname of the source (primary) server. If you do, you will need to recreate your replication configuration.
- After the configuration is complete, the primary server will be added as a client on the replica server. We do not recommend assigning any resources to this client since its purpose is to be used for replication only.
When will replication begin?
If you have configured replication for an individual resource, the system will begin synchronizing the disks immediately after the configuration is complete, provided the disk is attached to a client and is receiving I/O activity. If you have configured replication for a group, synchronization will not start until one of the replication policies (time or watermark) is triggered.
If you configured continuous replication
If you are using continuous replication, you will be prompted to create a Continuous Replication Resource for the primary disk and a Snapshot Resource for the replica disk. If you are not using continuous replication, the wizard will only ask you to create a Snapshot Resource on the replica.
Because old data blocks are moved to the Snapshot Resource as new data is written to the replica, the Snapshot Resource should be large enough to handle the amount of changed data that will be replicated. Since it is not always possible to know how much changed data will be replicated, it is a good idea for you to enable expansion on the target server's Snapshot Resource. You then need to decide what to do if your Snapshot Resource runs out of space (reaches the maximum allowable size or does not have expansion enabled). The default is to stop writing data, meaning the system will prevent any new writes from reaching the disk once the Snapshot Resource runs out of space and cannot allocate any more.
Protect your replica resource
For added protection, you can mirror or TimeMark an incoming replica resource by highlighting the replica resource and right-clicking on it.

Create a Continuous Replication Resource


This is needed only if you are using continuous replication.
1. Select the storage pool or physical device that should be used to create this Continuous Replication Resource.


2. Select how you want to create this Continuous Replication Resource.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.
Express lets you designate how much space to allocate and then automatically creates the resource using an available device.
Note: The Continuous Replication Resource cannot be expanded. Therefore, you should allocate enough space for the resource up front. By default, the size is 256 MB or 5% of the size of your primary disk (or 5% of the total size of all members of the group), whichever is larger; for example, a 100 GB primary disk gets a 5 GB Continuous Replication Resource by default. If the primary disk regularly experiences a large number of writes, or if the connection to the target server is slow, you may want to increase the size, because if the Continuous Replication Resource becomes full, the system switches to delta replication mode until the next regularly-scheduled replication takes place. If you outgrow your resource, you will need to disable continuous replication and then re-enable it.
3. Verify the physical devices you have selected, confirm that all information is correct, and then click Finish.
On the Replication tab, you will notice that the Replication Mode is set to Delta. Replication must be initiated once before it switches to continuous mode. You can either wait for the first scheduled replication to occur or right-click your SAN Resource and select Replication --> Synchronize to force replication to occur.


Check replication status


There are several ways to check replication status:
- The Replication tab on the primary disk displays information about a specific resource.
- The Incoming and Outgoing objects under the Replication object display information about all replications to or from a specific server.
- The Event Log displays a list of replication information and errors.
- The Delta Replication Status Report provides a centralized view displaying real-time replication status for all drives enabled for replication.

Replication tab
The following are examples of what you will see by checking the Replication tab for a primary disk:
With Continuous Replication enabled

With Delta Replication


All times shown on the Replication tab are based on the primary server's clock.
Accumulated Delta Data is the amount of changed data. Note that this value will not display accurate results after a replication has failed. The information will only be accurate after a successful replication.
Replication Status / Last Successful Sync / Average Throughput - You will only see these fields if you are connected to the target server.
Transmitted Data Size is based on the actual size transmitted after compression or with Microscan performed. Delta Sent represents the amount of data sent (or processed) based on the uncompressed size. If compression and Microscan are not enabled, the Transmitted Data Size will be the same as Delta Sent and the Current/Average Transmitted Data Throughput will be the same as Instantaneous/Average Throughput. If compression or Microscan is enabled and the data can be compressed, or blocks of data have not changed and will not be sent, the Transmitted Data Size will differ from Delta Sent and both Current/Average Transmitted Data Throughput values will be based on the actual size of data (compressed or Microscanned) sent over the network.

Event Log
Replication events are also written to the primary server's Event Log, so you can check there for status and operational information, as well as any errors.

Replication object
The Incoming and Outgoing objects under the Replication object display information about each server that replicates to this server or receives replicated data from this server. If the server's icon is white, the partner server is "connected" or "logged in". If the icon is yellow, the partner server is "not connected" or "not logged in".


Replication performance
Set global replication options
You can set global replication options that affect system performance during replication. While the default settings should be optimal for most configurations, you can adjust the settings for special situations. To set global replication properties for a server:
1. Right-click the server and select Properties.
2. Select the Performance tab.
Default Protocol - Select the default protocol to use for replication jobs.
Timeout replication after [n] seconds - Timeout after inactivity. This must be the same on both the primary and target replication servers. Note: This parameter can be affected by the TCP timeout setting.
Throttle - The maximum amount of bandwidth that will be used for replication. Changing the throttle allows you to limit the amount of bandwidth replication will use. This is useful when the WAN is shared among many applications and you do not want replication traffic to dominate the link. This parameter affects all resources using either remote or local replication. Throttle does not affect manual replication scans; it only affects actual replication. It also does not affect continuous replication, which uses all available bandwidth. Leaving the Throttle field set to 0 (zero) means that the maximum available bandwidth will be used. Besides 0, valid input is 10-1,000,000 KB/s (1G).
Enable Microscan - Microscan analyzes each replication block on the fly during replication and transmits only the changed sections of the block. This is beneficial if the network transport speed is slow and the client makes small random updates to the disk. This global Microscan option overrides the Microscan setting for each individual virtual device.
Tune replication parameters
You can run a test to discover the maximum bandwidth and latency for remote replication within your network.
1. Right-click a server under Replication --> Outgoing and select Replication Parameters.
2. Click the Test button to see information regarding the bandwidth and latency of your network.


Assign clients to the replica disk


You can assign clients to the replica disk in preparation for promotion or reversal. Clients will not be able to connect to the replica disk, and the clients' operating systems will not see the replica disk, until after the promotion or reversal. After the replica disk is promoted or a reversal is performed, you can restart the SAN Client to see the new information and connect to the promoted disk. To assign clients:
1. Right-click an incoming replica resource under the Replication object and select Assign.
2. Select the client to be assigned and assign the appropriate access rights. If the client you want to assign does not appear in the list, click the Add button.
3. Confirm all of the information and then click Finish to assign the client.

Switch clients to the replica disk when the primary disk fails
Because the replica disk is used for disaster recovery purposes, clients do not have access to the replica. If a disaster occurs and the replica is needed, the administrator can promote the replica to become the primary disk so that clients can access it. The Promote option promotes the replica disk to a usable resource. Doing so breaks the replication configuration. Once a replica disk is promoted, it cannot revert back to a replica disk.
You must have a valid replica disk in order to promote it. For example, if a problem occurred (such as a transmission problem or the replica disk failing) during the first and only replication, the replicated data would be compromised and therefore could not be promoted to a primary disk. If a problem occurred during a subsequent replication, the data from the Snapshot Resource will be used to recreate the replica from its last good state.
Notes:
- You cannot promote a replica disk while a replication is in progress.
- If you are using continuous replication, you should not promote a replica disk while write activity is occurring on the replica.
- If you just need to recover a few files from the replica, you can use the TimeMark/TimeView option instead of promoting the replica. Refer to Use TimeMark/TimeView to recover files from your replica for more information.

To promote a replica:
1. In the Console, right-click on an incoming replica resource under the Replication object and select Replication --> Promote.
If the primary server is not available, you will be prompted to roll back the replica to the last good TimeMark, assuming you have TimeMark enabled on the replica. When this occurs, the wizard will not continue with the promotion and you will have to check the Event Log to make sure the rollback completes successfully. Once you have confirmed that it has completed successfully, re-select Replication --> Promote to continue.
2. Confirm the promotion and click OK.
3. Assign the appropriate clients to this resource.
4. Rescan devices or restart the client to see the promoted resource.

Recreate your original replication configuration


Your original primary disk became unusable due to a disaster and you promoted the replica disk to a primary disk so that it could service your clients. You have now fixed, rebuilt, or replaced your original primary disk. Do the following to recreate your original replication configuration:
1. From the current primary disk, run the Replication Setup wizard and create a configuration that replicates from the current resource to the original primary server. Make sure a successful replication has been performed to synchronize the data after the configuration is completed. If you select the Scan option, you must wait for it to complete before running another scan or replication.
2. Assign the appropriate clients to the new replica resource.
3. Detach all clients from the current primary disk. For Unix clients, type ./ipstorclient stop from /usr/local/ipstorclient/bin (see the example after these steps).
4. Right-click on the appropriate primary resource or replica resource and select Replication --> Reversal to switch the roles of the disks. Afterwards, the replica disk becomes the new primary disk while the original primary disk becomes the new replica disk. The existing replication configuration is maintained, but clients will be disconnected from the former primary disk. For more information, refer to 'Reverse a replication configuration'.
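A minimal shell sketch of the Unix client detach in step 3, using the installation path given above; run it on each Unix SAN Client:

    # Stop the IPStor SAN Client service so the host releases the primary disk
    cd /usr/local/ipstorclient/bin
    ./ipstorclient stop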


Use TimeMark/TimeView to recover files from your replica


While the main purpose of replication is disaster recovery, the TimeMark feature allows you to access individual files on your replica without needing to promote the replica. This can be useful when you need to recover a file that was deleted from the primary disk: you can simply create a TimeView of the replica, assign it to a client, and copy back the needed file. Using TimeMark with a replica is also useful for 'what if' scenarios, such as testing a new application on your actual, but not live, data. In addition, using HyperTrac Backup with replication and TimeMark allows you to back up your replica at your disaster recovery site without impacting any application servers. For more information about using TimeMark and HyperTrac, refer to the HyperTrac Backup Accelerator User Guide.

Change your replication configuration options


You can change the following for your replication configuration:
- Static IP address of a remote target server
- Policies that trigger replication (watermark, interval, time)
- Replication protocol
- Use of compression, encryption, or microscan
- Replication mode

To change the configuration:
1. Right-click on the primary disk and select Replication --> Properties.
2. Make the appropriate changes and click OK.
Notes:
- If you are using continuous replication and you enable or disable encryption, the change will take effect after the next delta replication.
- If you are using continuous replication and you change the IP address of your target server, replication will switch to delta replication mode until the next regularly scheduled replication takes place.


Suspend/resume replication schedule


You can suspend future replications from being automatically triggered by your replication policies (watermark, interval, time) for an individual virtual device. Once suspended, all of the device's replication policies are put on hold, preventing any future policy-triggered replication from starting. This does not stop a replication that is currently in progress, and you can still manually start the replication process while the schedule is suspended. When replication is resumed, it will start at the normally scheduled interval based on the device's replication policies.
To suspend/resume replication:
1. Right-click on the primary disk and select Replication --> Suspend (or Resume).
You can see the current setting by checking the Replication Schedule field on the Replication tab of the primary disk.

Stop a replication in progress


You can stop a replication that is currently in progress. To stop a replication: 1. Right-click on the primary disk and select Replication --> Stop.

Manually start the replication process


To force a replication that is not scheduled, select Replication --> Synchronize. Note: If replication is already occurring, this request will fail.


Reverse a replication configuration


Reversal switches the roles of the replica disk and the primary disk: the replica disk becomes the new primary disk while the original primary disk becomes the new replica disk. The existing replication configuration is maintained. After the reversal, clients will be disconnected from the former primary disk.
To perform a role reversal, right-click on the appropriate primary resource or replica resource and select Replication --> Reversal.
Notes:
- The primary and replica must be synchronized in order to reverse a replica. If needed, you can manually start the replication from the Console and re-attempt the reversal after the replication is completed.
- If you are using continuous replication, you have to disable it before you can perform the reversal.
- If you are performing a role reversal on a group, we recommend that the group have 40 or fewer resources. If there are more than 40 resources in a group, we recommend configuring multiple groups to accomplish this task.

Reverse a replica when the primary is not available


Replication can be reversed from the replica server side even if the primary server is offline or not accessible. When you reverse this type of replica, the replica disk is promoted to become the primary disk and the replication configuration is removed. Afterwards, when the original primary server becomes available, you must repair the replica in order to re-establish a replication configuration.
Note: If a primary disk is in a group but the group doesn't have replication enabled, the primary resource must first leave the group before the replica repair can be performed.

Forceful role reversal


You can force a role reversal when the primary server is down and the replica is up, or when the primary server is up but corrupted and the replica is not synchronized, as long as there are no replication processes running.
To perform a forceful role reversal:
1. Suspend the replication schedule. If you are using Continuous Mode, disable it by right-clicking on the disk, selecting Replication --> Properties, and unchecking Continuous Mode on the Replication Transfer Mode and TimeMark tab under the Replication Setup Options.
2. Right-click on the primary or replica server and select Replication --> Forceful Reversal.
3. Type YES to confirm the operation and then click OK.
4. Once the forceful role reversal is done, repair the promoted replica to establish the new connection between the new primary and replica server. The replication repair operation must be performed from the NEW primary server.
Note: If the SAN Resource is assigned to a client on the original primary server, it must be unassigned in order to perform the repair on the new primary.
5. Confirm the IP address and click OK. The current primary disk remains the primary disk and begins replicating to the recovered server.
After the repair operation is complete, replication will synchronize again, either by schedule or by manual trigger. A full synchronization is performed if replication was not synchronized prior to the forceful role reversal, and the replication policy from the original primary server will be used/updated on the new primary server. If you want to recreate your original replication configuration, you will need to perform another reversal so that your original primary becomes the primary disk again.
Notes:
- The forceful role reversal operation can be performed even if the CDP journal has unflushed data.
- The forceful role reversal operation can be performed even if data is not synchronized between the primary and replica server.
- The snapshot policy, TimeMark/CDP, and throttle control policy settings are not swapped after the repair operation for replication role reversal.

Relocate a replica
The Relocate feature allows replica storage to be moved from the original replica server to another server while preserving the replication relationship with the primary server. Relocating reassigns ownership to the new server and continues replication according to the set policy. Once the replica storage is relocated to the new server, the replication schedule can be immediately resumed without the need to rescan the disks.


Before you can relocate the replica, you must import the disk to the new NSS appliance. Refer to the NSS Reference Guide for additional information. Once the disk has been imported, open the source server, highlight the virtual resource that is being replicated, right-click, and select Relocate.
Notes:
- You cannot relocate a replica that is part of a group.
- If you are using continuous replication, you must disable it before relocating a replica. Failure to do so will keep replication in delta mode, even after the next manual or scheduled replication occurs. You can re-enable continuous replication after relocating the replica.

Remove a replication configuration


Right-click on the primary disk and select Replication --> Disable. This allows you to remove the replication configuration on the primary and either delete or promote the replica disk on the target server at the same time.

Expand the size of the primary disk


The primary disk and the replica disk must be the same size. If you expand the primary disk, the replica disk will be enlarged to the same size.
Note: Do not attempt to expand the primary disk during replication. If you do, the disk will expand but the replication will fail.

Replication with other NSS features


Replication and TimeMark
The timestamp of a TimeMark on a replica is the timestamp of the source. If you enable TimeMark/CDP on the replica side, you cannot create any TimeMarks.

Replication and Failover


If replication is in progress and a failover occurs at the same time, the replication will stop. After failover, replication will start at the next normally scheduled interval. This is also true in reverse, if replication is in progress and a recovery occurs at the same time.

Replication and Mirroring


When you promote the mirror of a replica resource, the replication configuration is maintained.

Depending upon the replication schedule, when you promote the mirror of a replica resource, the mirrored copy may not be an identical image of the replication source. In addition, the mirrored copy may contain corrupt data or an incomplete image if the last replication was not successful or if replication is currently occurring. Therefore, it is best to make sure that the last replication was successful and that replication is not occurring when you promote the mirrored copy.

Replication and Thin Provisioning


A disk with thin provisioning enabled can be configured to replicate to a normal SAN resource or to another disk with thin provisioning enabled. A normal SAN Resource can replicate to a thin-provisioned disk as long as the size of the SAN Resource is equal to or greater than 10 GB (the minimum permissible size of a thin disk).


Troubleshooting
NSS Virtual Appliance settings
FalconStor NSS Virtual Appliance settings can be verified as described below:

Checking the resource reservation


Once you have installed the NSS Virtual Appliance, you are ready to configure the resource reservation. If you installed using the installation script method, the resource reservation is set automatically. If you installed using the virtual appliance import method, you will need to set it manually. It is important to make sure the NSSVA has enough resources, especially in a shared architecture with other virtual machines.
To allocate resources:
1. Launch the VMware Infrastructure/vSphere Client and connect to the ESX server with root privileges.
2. Right-click on the installed FalconStor-NSSVA and click Edit Settings.
3. On the NSSVA Virtual Machine Properties screen, select the Resources tab, then highlight CPU under the settings list.
4. On the Resource Allocation screen for the CPU, enter 2000 (MHz) as the Reservation setting.

5. Click Memory under the settings list and enter 1024 MB as the Reservation setting in the Resource Allocation pane.
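To double-check the reservation outside the GUI, you can inspect the appliance's .vmx file from the ESX service console. A minimal sketch, assuming the datastore path shown below; sched.cpu.min is expressed in MHz and sched.mem.min in MB:

    # Verify the CPU/memory reservations recorded in the virtual machine's .vmx file
    # (the datastore path is an assumption for this example)
    grep -E "^sched\.(cpu|mem)\.min" /vmfs/volumes/datastore1/FalconStor-NSSVA/FalconStor-NSSVA.vmx
    # Expected output after steps 4 and 5:
    #   sched.cpu.min = "2000"
    #   sched.mem.min = "1024"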


Checking the virtual Network Adapter setting


The FalconStor NSS Virtual Appliance is pre-configured with two virtual network adapters in the virtual machine settings. You will need to connect both adapters to the correct network connection. For a high availability architecture, the secondary adapter must connect to an independent virtual switch whose physical network adapter is linked by a crossover cable to the partner ESX server.
1. Right-click the installed FalconStor-NSSVA and click Edit Settings.
2. On the NSSVA Virtual Machine Properties screen, select the Hardware tab, then click Network Adapter 1 under the hardware list.
3. In the Network Connection setting of the selected network adapter, click the Network label setting to select the correct virtual machine connection from the list.
4. Click Network Adapter 2 and select the correct virtual machine connection from the list.


Optimizing SCSI software initiator performance


iSCSI performance can be optimized on an ESX host by modifying the configuration of the network and separating network traffic. For example, when you have an ESX server with virtual machine traffic, VMotion traffic, and iSCSI traffic, consider separating the network traffic as follows:

vSwitch1 for virtual machines
vSwitch2 for VMotion
vSwitch3 for iSCSI

The general recommendation from VMware is to separate vSwitches so that iSCSI traffic does not share a vSwitch with general network traffic. Refer to the VMware Knowledge Base article at:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1001251
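On a classic ESX host, the service console tools can script this separation. A minimal sketch, assuming vmnic2 is an unused physical uplink; the switch name, port group name, and IP address are illustrative:

    # Create a dedicated vSwitch and port group for iSCSI traffic
    esxcfg-vswitch -a vSwitch3                 # add a new virtual switch
    esxcfg-vswitch -L vmnic2 vSwitch3          # link a dedicated physical uplink
    esxcfg-vswitch -A iSCSI vSwitch3           # add a port group named iSCSI
    # Give the software iSCSI initiator a VMkernel port on that port group
    esxcfg-vmknic -a -i 192.168.30.10 -n 255.255.255.0 iSCSI
    esxcfg-vswitch -l                          # list switches/port groups to verify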

Optimizing performance when using a virtual disk on an NSSVA for iSCSI devices
You can allocate an EagerZeroedThick disk at the creation of the virtual disk to optimize performance.
Notes:
- An eager-zeroed thick disk has all of its space allocated and zeroed out at the time of creation. This is a time-consuming process.
- Do not set eagerzeroedthick on both the NSSVA's system/data vmdks and the guest VM's vmdks.
- Do not enable Fault Tolerance on either guest; this would create a ghost system on the ESX HA pair that is written to simultaneously over the LAN and will impact performance.
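One way to create such a disk from the ESX service console is with vmkfstools; a sketch, where the size and datastore path are assumptions:

    # Create an eager-zeroed thick virtual disk for NSSVA data storage
    vmkfstools -c 100g -d eagerzeroedthick /vmfs/volumes/datastore1/FalconStor-NSSVA/data1.vmdk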

Resolving slow performance on the Dell PERC6i


There is a performance issue specific to Dell machines with a PERC 6i RAID controller that is addressed by enabling write cache (Write Back on a PERC). It is recommended that you use a battery-backed write cache to improve PERC performance.
Not having a cache battery doesn't mean that you can't enable write cache for better performance, but Dell officially recommends against enabling write cache if there is no battery in your PERC 6i controller. In the event of a power outage, the lack of a cache battery means the data in the controller cache is not flushed to disk, resulting in data loss and possible corruption. The battery allows enough time for the controller to flush the data to disk, avoiding loss of data.


To enable write cache without a battery, you need to modify the BIOS settings. Go to the virtual disk settings and select Advanced settings > Write policy (choose Write Back) > select the checkbox "Force WB with no battery". Consult Dell regarding the risks of this configuration, and confirm whether your PERC card has a battery.
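If Dell OpenManage Server Administrator is installed, the same policy can likely be changed from the command line without rebooting into the BIOS. A sketch, where the controller and vdisk IDs are assumptions and the syntax should be verified against your OMSA version (omconfig storage vdisk -?):

    # List virtual disks to find the controller and vdisk IDs
    omreport storage vdisk controller=0
    # Force write-back cache even without a battery (risk of data loss on power failure)
    omconfig storage vdisk action=changepolicy controller=0 vdisk=0 writepolicy=fwb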

Cross-mirror failover
Symptom: During cross-mirror configuration, the system reports a mismatch of physical disks on the two appliances even though you are sure that the configuration of the two appliances is exactly the same, including the ACSL, disk size, CPU, and memory.
Cause/Resolution: An iSCSI initiator must be installed on the storage server; one is included on FalconStor cross-mirror appliances. If you are not using a FalconStor cross-mirror appliance, you must install the iSCSI initiator RPM from the Linux CD before running the IPStorinstall installation script. The script will update the initiator.
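A minimal sketch of that prerequisite step, assuming a Red Hat-style system; the CD mount point and exact package file name may differ on your distribution:

    # Install the iSCSI initiator package from the Linux CD, then verify it
    rpm -ivh /media/cdrom/iscsi-initiator-utils-*.rpm
    rpm -q iscsi-initiator-utils
    # Then run the IPStorinstall installation script, which updates the initiator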


Appendix A - Checklist
A. VMware ESX Server system configuration
VMware ESX Server system configuration check list. For each item, record the value you find.

What to check: The primary ESX server first virtual switch name
How to check: Connect to the primary ESX server via the VMware vSphere client. On the console, go to Configuration -> Networking -> first Virtual Switch. Example: vSwitch0

What to check: The primary ESX server second virtual switch name
How to check: Connect to the primary ESX server via the VMware vSphere client. On the console, go to Configuration -> Networking -> second Virtual Switch. Example: vSwitch1

What to check: The primary ESX server service console IP on vSwitch0
How to check: Connect to the primary ESX server via the VMware vSphere client. On the console, go to Configuration -> Networking. Select Properties on vSwitch0, then select Service Console and verify the IP displayed in the right panel.

What to check: The primary ESX server service console IP on vSwitch1
How to check: Connect to the primary ESX server via the VMware vSphere client. On the console, go to Configuration -> Networking. Select Properties on vSwitch1, then select Service Console and verify the IP displayed in the right panel.

What to check: The primary ESX server VMKernel IP on vSwitch0
How to check: To get the IP, first manually add a VMKernel connection type by selecting the vSwitch, clicking Properties, and clicking the Add button to add a VMKernel IP. After this, get the IP via the VMware vSphere client: on the console, go to Configuration -> Networking -> click Properties on vSwitch0 -> select VMKernel; the IP is shown in the right panel.


What to check: The secondary ESX server 1st virtual switch name
How to check: Connect to the secondary ESX server via the VMware vSphere client. From the ESX console, go to Configuration -> Networking -> first Virtual Switch. Example: vSwitch0

What to check: The secondary ESX server 2nd virtual switch name
How to check: Connect to the secondary ESX server via the VMware vSphere client. From the ESX console, go to Configuration -> Networking -> second Virtual Switch. Example: vSwitch1

What to check: The secondary ESX server service console IP on vSwitch0
How to check: Connect to the secondary ESX server via the VMware vSphere client. From the ESX console, go to Configuration -> Networking -> click Properties on vSwitch0 -> select Service Console; the IP is shown in the right panel.

What to check: The secondary ESX server service console IP on vSwitch1
How to check: From the ESX console, go to Configuration -> Networking -> click Properties on vSwitch1 -> select Service Console; the IP is shown in the right panel.

What to check: The secondary ESX server VMKernel IP on vSwitch0
How to check: To get the IP, first manually add a VMKernel connection type by selecting the vSwitch, clicking Properties, and clicking the Add button to add a VMKernel IP. After this, get the IP via the VMware vSphere client: on the console, go to Configuration -> Networking -> click Properties on vSwitch0 -> select VMKernel; the IP is shown in the right panel.


B. NSS Virtual Appliance system information


NSS Virtual Appliance system information check list. For each item, record the value you find.

What to check: The primary NSSVA build number
How to check: Select the VA machine on the console -> click Summary on the right panel -> check the build number in the Annotations.

What to check: The primary NSSVA root password
How to check: According to the user guide (page 18, step 3), it is now IPStor101.

What to check: The primary NSSVA eth0 IP address
How to check: Select the VA machine on the console -> click Summary -> check the IP address under the General item.

What to check: The primary NSSVA eth1 IP address
How to check: Select the VA machine on the console -> click Summary -> check the IP address under the General item -> View All.

What to check: The primary NSSVA eth0 virtual machine network
How to check: Right-click on the VA machine -> Edit Settings -> Hardware -> select Network adapter 1 -> check the network connection's Network label.

What to check: The primary NSSVA eth1 virtual machine network
How to check: Right-click on the VA machine -> Edit Settings -> Hardware -> select Network adapter 2 -> check the network connection's Network label.

What to check: The secondary NSSVA build number
How to check: Same as the primary NSSVA build number check.

What to check: The secondary NSSVA root password
How to check: According to the user guide (page 18, step 3), it is now IPStor101.

What to check: The secondary NSSVA eth0 IP address
How to check: Same as the primary NSSVA eth0 IP address check.

What to check: The secondary NSSVA eth1 IP address
How to check: Same as the primary NSSVA eth1 IP address check.


What to check: The secondary NSSVA eth0 virtual machine network
How to check: Right-click on the VA machine -> Edit Settings -> Hardware -> select Network adapter 1 -> check the network connection's Network label.

What to check: The secondary NSSVA eth1 virtual machine network
How to check: Right-click on the VA machine -> Edit Settings -> Hardware -> select Network adapter 2 -> check the network connection's Network label.

What to check: The user account and password must be equal on both NSSVAs
How to check: The user account means the "root" account. If they are not equal, failover cannot be set up successfully.

What to check: No SAN client has been created on the secondary NSSVA
How to check: From the FalconStor console, connect to the secondary NSSVA -> SAN Clients.

What to check: No SAN resource has been created on the primary NSSVA
How to check: From the FalconStor console, connect to the primary NSSVA -> Logical Resources -> SAN Resources.


C. Network Configuration
Network Configuration check list. For each item, record the value you find.

What to check: Make sure the primary ESX server has two physical network adapters installed and linked to independent virtual switches.
How to check: Connect to the primary ESX server via the VMware vSphere client. From the ESX console, go to Configuration -> Networking and find the VM network of vSwitch0 and vSwitch1. Two physical adapters should be connected separately. Example: vmnic0 & vmnic1

What to check: Make sure the secondary ESX server has two physical network adapters installed and linked to independent virtual switches.
How to check: Connect to the secondary ESX server via the VMware vSphere client. From the ESX console, go to Configuration -> Networking -> find the VM network of vSwitch0 and vSwitch1 -> two physical adapters should be connected separately. Example: vmnic0 & vmnic1

What to check: Make sure the crossover cable connects the 2nd physical network adapter of the primary and secondary ESX servers.
How to check: Connect to the secondary ESX server via the VMware vSphere client. Go to Configuration -> Networking -> Virtual Switch: vSwitch1 -> the 2nd physical network adapter (vmnic1) should be connected to vSwitch1.

What to check: Make sure the IP addresses of the following items are set in the same IP subnet and can ping each other: 1. The service console IP on the 1st virtual switch of the primary and secondary ESX servers; 2. The VMkernel IP on the 1st virtual switch of the primary and secondary ESX servers; 3. The eth0 IP of the primary and secondary NSSVAs; 4. The client access IP address in the Cross-Mirror setting.
How to check: For 1 and 2, use SSH to log in to the primary ESX server and ping the service console and VMkernel IPs of the secondary ESX server on the 1st virtual switch. For 3, use SSH to log in to the primary NSSVA and ping the eth0 IP of the secondary NSSVA. (See the connectivity sketch after this checklist.)


Network Configuration check list What to check Make sure the IP address of the following items are set in the same IP subnet and can ping each other: 1. The IP address of the service console on the 2nd virtual switch in the primary and the secondary ESX server 2. The IP address of the eth1 in the primary and secondary NSSVA 3. The IP addresses list for Cross-Mirror setting: The heart-beat IP of the primary NSSVA (eth0): New IP address The Cross-Mirror IP of the primary NSSVA (eth1) The Cross-Mirror IP of the Secondary NSSVA (eth1) The client access IP for crossmirror access: Original primary eth0 IP How to check For 1. Use SSH to login to Primary ESX server. Ping the service console and VMkernel IP of secondary ESX server on 2nd virtual switch. For 2. & 3. Use SSH to login to Primary NSSVA. Ping the eth0 IP of secondary NSSVA. The IP will be set to monitor primary health during setup Failover The heart-beat IP will be created during setting Failover. This IP should be the same subnet as eth0. The Cross-Mirror IP of primary/secondary NSSVA should be the NSSVA eth1 IP address according to B. NSS Virtual Appliance system information on page 86. Value

D. Storage Configuration
Storage Configuration check list:

What to check: The category of all devices in the secondary NSSVA is set to "un-assigned".
How to check: Check the storage configuration via the FalconStor Management Console. Go to Physical Resources -> Physical Devices -> SCSI Devices.

What to check: The devices in the secondary NSSVA can be mapped one-to-one to the devices in the primary NSSVA; they have the same size and SCSI ID.
How to check: Check the device information via the FalconStor Management Console. Go to Physical Resources -> select SCSI Devices on the right panel.

What to check: 10 GB of free space is available on the primary NSSVA to create the configuration repository during the Cross-Mirror configuration.
How to check: Using the FalconStor Management Console, connect to the NSSVA. Navigate to Physical Resources -> Physical Devices -> SCSI Devices. You should see at least 10 GB of free space.


Index

C
Compression, Replication 63
Configuration 19
Console, Register keycodes 21
console 16
Continuous replication 66
  Enable 58
  Resource 67, 68
Cross mirror
  Check resources & swap 48
  Recover from disk failure 47
  Requirements 6
  Re-synchronize 48
  Verify & repair 48

D
Datastore 13
Delta Mode 58
Delta Replication Status Report 69
Disaster recovery, Replication 56

E
Encryption, Replication 63

F
Failover
  Auto recovery 46
  Cross mirror
    Check resources & swap 48
    Recover from disk failure 47
    Requirements 6
    Re-synchronize 48
    Verify & repair 48
  Fix failed server after failover 46
  Force a takeover 54
  Manually initiate a recovery 55
  Physical device change 54
  Recovery 46
  Remove configuration 55
  Replication note 78
  Requirements, Cross mirror 6
  Server changes 54
  Subnet change 54
  Suspend/resume 55
failover configuration 43
FalconStor Management Console 2
FalconStor Virtual Appliance Setup utility 16
Force takeover 43

G
Global options 71

H
Hardware tab 81
Health monitoring 41
high availability (HA) 1

I
Installation, Snapshot Agent 15

K
Keycodes, Register 21
Knowledge requirements 11

L
Local Replication 56

M
Microscan 63, 71
Mirroring, Replication note 78

N
Network Mapping 13
NSS Virtual Appliance 2

P
Performance, Replication 71
Power Control Test 44
Power control utility 42
Primary ESX server connection 43
Primary ESX server root password 43
Primary NSSVA network test 44

R
Relocate a replica 77
Remote Replication 56
Replica resource, Protect 67
Replication 56, 71
  Assign clients to replica disk 72
  Change configuration options 74
  Compression 63
  Configuration 56
  Continuous replication resource 67
  Delta mode 58
  Encryption 63
  Expand primary disk 78
  Failover note 78
  First replication 66
  Force 75
  Microscan 63, 71
  Mirroring note 78
  Performance 71
  Parameters 71
  Policies 60
  Primary disk 56
  Promote 72
  Recover files 74
  Recreate original configuration 73
  Relocate replica 77
  Remove configuration 78
  Replica disk 56
  Requirements 56
  Resume schedule 75
  Reversal 73, 76
  Scan 73
  Setup 57
  Start manually 75
  Status 69
  Stop in progress 75
  Suspend schedule 75
  Switch to replica disk 72
  Synchronize 68, 75
  Test 71
  Throttle 71
  TimeMark note 78
  TimeMark/TimeView 74
  Timeout 71
Reports, Delta Replication Status 69
Resource Allocation 80

S
SafeCache 61
SAN Disk Manager 2
Secondary ESX server connection 43
Security 84
Snapshot Agents 2

T
Thin Provisioning 1, 56
Thin Replication 1
Throttle 71
Throughput Control
  enable 61
  set policy 62
TimeMark, Replication note 78

V
vapwc-config 43
virtual iSCSI SAN 1
VMware Infrastructure Client 16

W
watermark value 60
