SVM
What SVMs are
Storage Virtual Machines (SVMs, formerly known as Vservers)
contain data volumes and one or more LIFs through which they
serve data to the clients. Starting with clustered Data ONTAP
8.1.1,
SVMs can either contain one or more FlexVol volumes, or a single
Infinite Volume.
SVMs securely isolate the shared virtualized data storage and
network, and each SVM appears as a single dedicated server to
the clients. Each SVM has a separate administrator authentication
domain and can be managed independently by its SVM
administrator.

In a cluster, SVMs facilitate data access. A cluster must have at least one SVM to serve data. SVMs use the storage and network resources of the cluster. However, the volumes and LIFs are exclusive to the SVM. Multiple SVMs can coexist in a single cluster without being bound to any node in a cluster. However, they are bound to the physical cluster on which they exist.
A cluster can have one or more SVMs with FlexVol volumes and
SVMs with Infinite Volume.

Each SVM with FlexVol volumes in a NAS environment presents a single directory hierarchical view and has a unique namespace. The namespace enables NAS clients to access data without specifying the physical location of the data. The namespace also enables the cluster and SVM administrators to manage distributed data storage as a single directory with multiple levels of hierarchy.

The volumes within each NAS SVM are related to each other through junctions and are mounted on junction paths. These junctions present the file system in each volume. The root volume of the SVM is a FlexVol volume that resides at the top level of the namespace hierarchy; additional volumes are mounted to the SVM root volume to extend the namespace. As volumes are created for the SVM, the root volume of the SVM contains junction paths.
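For example, the following commands illustrate how a namespace is built (a minimal sketch; svm1, aggr1, and the volume names are placeholder values, not from this document):
volume create -vserver svm1 -volume vol_eng -aggregate aggr1 -size 100GB -junction-path /eng (create a volume and mount it at /eng in the SVM namespace)
volume mount -vserver svm1 -volume vol_eng_projects -junction-path /eng/projects (mount another existing volume below /eng)
volume show -vserver svm1 -fields junction-path (list the junction path of every volume in the SVM)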
SVMs with FlexVol volumes can contain files and LUNs. They
provide file-level data access by using NFS and CIFS protocols for
the NAS clients, and block-level data access by using iSCSI and
Fibre Channel (FC) (FCoE included) for SAN hosts.

SVMs with Infinite Volume can contain only one Infinite Volume to
serve data. Each SVM with Infinite Volume includes only one
junction path, which has a default value of /NS. The junction
provides a single mount point for the large namespace provided
by the SVM with Infinite Volume.
You cannot add more junctions to an SVM with Infinite Volume.
However, you can increase the size of the Infinite Volume.

SVMs with Infinite Volume can contain only files. They provide file-level data access by using NFS and CIFS protocols. SVMs with Infinite Volume cannot contain LUNs and do not provide block-level data access.

Managing SVMs
Storage Virtual Machine (SVM) administrators can administer SVMs and their resources, such as volumes, protocols, and services, depending on the capabilities assigned by the cluster administrator. SVM administrators cannot create, modify, or delete SVMs.

Note: SVM administrators cannot log in to System Manager.

SVM administrators might have all or some of the following administration capabilities:
Data access protocol configuration
SVM administrators can configure data access protocols, such as NFS, CIFS, iSCSI, and Fibre Channel (FC) protocol (Fibre Channel over Ethernet or FCoE included).
Services configuration
SVM administrators can configure services such as LDAP, NIS, and DNS.
Storage management
SVM administrators can manage volumes, quotas, qtrees, and files.
LUN management in a SAN environment
Management of Snapshot copies of the volume
Monitoring SVM
SVM administrators can monitor jobs, network connections, network interfaces, and the SVM health.
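As an illustration, a cluster administrator typically enables the built-in SVM administrator account before delegating an SVM (a sketch; vsadmin is the predefined account name and svm1 is a placeholder SVM name):
security login password -username vsadmin -vserver svm1 (set a password for the SVM administrator account)
security login unlock -username vsadmin -vserver svm1 (unlock the vsadmin account so it can log in through the SVM management LIF)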

Types of SVMs
A cluster consists of four types of SVMs, which help in managing the cluster and its resources and data access to the clients and applications.
A cluster contains the following types of SVMs:
Admin SVM
The cluster setup process automatically creates the admin SVM
for the cluster. The admin SVM represents the cluster.
Node SVM
A node SVM is created when the node joins the cluster, and the
node SVM represents the individual nodes of the cluster.
System SVM (advanced)
A system SVM is automatically created for cluster-level
communications in an IPspace.
Data SVM
A data SVM represents the data serving SVMs. After the cluster
setup, a cluster administrator must create data SVMs and add
volumes to these SVMs to facilitate data access from the cluster.
A cluster must have at least one data SVM to serve data to its
clients.

Note: Unless otherwise specified, the term SVM refers to a data (data-serving) SVM, which applies to both SVMs with FlexVol volumes and SVMs with Infinite Volume. In the CLI, SVMs are displayed as Vservers.
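The SVM types can be checked from the CLI, for example (the available types and output fields may vary by Data ONTAP version):
vserver show -fields type (lists every Vserver together with its type: admin, node, system, or data)
vserver show -type data (lists only the data-serving SVMs)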

Why you use SVMs
Storage Virtual Machines (SVMs, formerly known as Vservers) provide data access to clients regardless of the physical storage or controller, similar to any storage system. SVMs provide benefits such as nondisruptive operations, scalability, security, and unified storage.

SVMs provide the following benefits:


Multi-tenancy
An SVM is the fundamental unit of secure multi-tenancy, which enables partitioning of the storage infrastructure so that it appears as multiple independent storage systems. These partitions isolate the data and management.

Nondisruptive operations
SVMs can operate continuously and nondisruptively for as long as
they are needed. SVMs help clusters to operate continuously
during software and hardware upgrades, addition and removal of
nodes, and all administrative operations.

Scalability
SVMs meet on-demand data throughput and other storage requirements.

Security
Each SVM appears as a single independent server, which enables
multiple SVMs to coexist in a cluster while ensuring no data flows
among them.

Unified storage
SVMs can serve data concurrently through multiple data access protocols. SVMs provide file-level data access through NAS protocols, such as CIFS and NFS, and block-level data access through SAN protocols, such as iSCSI and FC (FCoE included). SVMs can serve data to SAN and NAS clients independently at the same time.

Note: SVMs with Infinite Volume can serve data only through NFS
and CIFS protocols.

Delegation of management
Each SVM can have its own user and administration
authentication. SVM administrators can manage the SVMs that
they are authorized to access. However, SVM administrators have
privileges assigned by the cluster administrators.

Easy management of large datasets
With SVMs with Infinite Volume, management of large and unstructured data is easier because the SVM administrator can manage one data container instead of many.

What a cluster is
A cluster consists of one or more nodes grouped together as HA pairs to form a scalable cluster. Creating a cluster enables the nodes to pool their resources and distribute work across the cluster, while presenting administrators with a single entity to manage. Clustering also enables continuous service to end users if individual nodes go offline.

The maximum number of nodes within a cluster depends on the platform model and licensed protocols.

Each node in the cluster can view and manage the same volumes as any other node in the cluster. The total file-system namespace, which comprises all of the volumes and their resultant paths, spans the cluster.

The nodes in a cluster communicate over a dedicated, physically isolated and secure Ethernet network. The cluster logical interfaces (LIFs) on each node in the cluster must be on the same subnet.

When new nodes are added to a cluster, there is no need to update clients to point to the new nodes. The existence of the new nodes is transparent to the clients.

If you have a two-node cluster (a single HA pair), you must configure cluster high availability (HA).

You can create a cluster on a stand-alone node, called a single-node cluster. This configuration does not require a cluster network, and enables you to use the cluster ports to serve data traffic. However, nondisruptive operations are not supported on single-node clusters.
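For a two-node cluster, cluster HA is enabled with the following commands (a minimal sketch; no node names are needed because the setting is cluster-wide):
cluster ha modify -configured true (configure cluster high availability on a two-node cluster)
cluster ha show (verify that cluster HA is configured)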

Differences between cluster and SVM administrators
Cluster administrators administer the entire cluster and the Storage Virtual Machines (SVMs, formerly known as Vservers) it contains. SVM administrators administer only their own data SVMs.

Cluster administrators can administer the entire cluster and its resources. They can also set up data SVMs and delegate SVM administration to SVM administrators. The specific capabilities that cluster administrators have depend on their access-control roles. By default, a cluster administrator with the admin account name or role name has all capabilities for managing the cluster and SVMs.
SVM administrators can administer only their own SVM storage and network resources, such as volumes, protocols, LIFs, and services. The specific capabilities that SVM administrators have depend on the access-control roles that are assigned by cluster administrators.
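As a sketch of this delegation, a cluster administrator could create an SVM administrator account with the predefined vsadmin role (svm1 and svm1admin are placeholder names):
security login create -vserver svm1 -username svm1admin -application ssh -authmethod password -role vsadmin (create an SVM administrator for svm1)
security login role show -vserver svm1 (list the access-control roles available within the SVM)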

What a node in the cluster is
A node is a controller in a cluster. It is connected to other nodes in the cluster over a private management cluster network. It is also connected to the disk shelves that provide physical storage for the Data ONTAP system or to third-party storage arrays that provide array LUNs for Data ONTAP use.

A node Storage Virtual Machine (SVM) represents a node in the cluster. The cluster setup process automatically creates a node SVM for each node in the cluster.

What an Infinite Volume is
An Infinite Volume is a single, scalable volume that can store up to 2 billion files and tens of petabytes of data.
With an Infinite Volume, you can manage multiple petabytes of data in one large logical entity, and clients can retrieve multiple petabytes of data from a single junction path for the entire volume.
An Infinite Volume uses storage from multiple aggregates on multiple nodes. You can start with a small Infinite Volume and expand it nondisruptively by adding more disks to its aggregates or by providing it with more aggregates to use.

What LIFs are
A LIF (logical interface) is an IP address or WWPN with associated characteristics, such as a role, a home port, a home node, a list of ports to fail over to, and a firewall policy. You can configure LIFs on ports over which the cluster sends and receives communications over the network.
LIFs can be hosted on the following ports:
Physical ports that are not part of interface groups
Interface groups
VLANs
Physical ports or interface groups that host VLANs
When a SAN protocol such as FC is configured on a LIF, the LIF is associated with a WWPN.
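For example, a NAS data LIF might be created as follows (a sketch; svm1, node1, e0d, and the addresses are placeholder values):
network interface create -vserver svm1 -lif data1 -role data -data-protocol nfs,cifs -home-node node1 -home-port e0d -address 192.168.1.50 -netmask 255.255.255.0 (create a NAS data LIF on port e0d of node1)
network interface show -vserver svm1 (list the LIFs that belong to the SVM)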

Roles for LIFs
A LIF role determines the kind of traffic that is supported over the LIF, along with the failover rules that apply and the firewall restrictions that are in place. A LIF can have any one of the five roles: node management, cluster management, cluster, intercluster, and data.
node management LIF
A LIF that provides a dedicated IP address for managing a particular node in a cluster.
Node management LIFs are created at the time of creating or joining the cluster. These LIFs are used for system maintenance, for example, when a node becomes inaccessible from the cluster.

cluster management LIF
A LIF that provides a single management interface for the entire cluster.
A cluster-management LIF can fail over to any node-management or data port in the cluster. It cannot fail over to cluster or intercluster ports.

cluster LIF
A LIF that is used to carry intracluster traffic between nodes in a
cluster. Cluster LIFs must always be created on 10-GbE network
ports.
Cluster LIFs can fail over between cluster ports on the same node,
but they cannot be migrated or failed over to a remote node.
When a new node joins a cluster, IP addresses are generated
automatically. However, if you want to assign IP addresses
manually to the cluster LIFs, you must ensure that the new IP
addresses are in the same subnet range as the existing cluster
LIFs.

data LIF
A LIF that is associated with a Storage Virtual Machine (SVM) and is used for communicating with clients. You can have multiple data LIFs on a port. These interfaces can migrate or fail over throughout the cluster. You can modify a data LIF to serve as an SVM management LIF by modifying its firewall policy to mgmt.
Sessions established to NIS, LDAP, Active Directory, WINS, and DNS servers use data LIFs.
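For example (svm1 and data1 are placeholder names):
network interface modify -vserver svm1 -lif data1 -firewall-policy mgmt (allow an existing data LIF to also carry SVM management traffic)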

intercluster LIF
A LIF that is used for cross-cluster communication, backup, and
replication. You must create an intercluster LIF on each node in
the cluster before a cluster peering relationship can be
established.
These LIFs can only fail over to ports in the same node. They
cannot be migrated or failed over to another node in the cluster.
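A sketch of creating an intercluster LIF, assuming a release where intercluster LIFs are owned by the admin SVM (cluster1, node1, e0e, and the addresses are placeholders):
network interface create -vserver cluster1 -lif ic1 -role intercluster -home-node node1 -home-port e0e -address 192.168.2.10 -netmask 255.255.255.0 (create an intercluster LIF on node1; repeat on every node before creating a cluster peer relationship)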

Guidelines for creating LIFs
There are certain guidelines that you should consider before creating a LIF.
Each Storage Virtual Machine (SVM) must have at least one SVM management LIF that is configured to route to external services, such as DNS, LDAP, Active Directory, NIS, and so on. An SVM management LIF can be configured to either serve data and route to external services (protocol=data, firewall-policy=mgmt) or only route to external services (protocol=none, firewall-policy=mgmt).
FC LIFs can be configured only on FC ports; iSCSI LIFs cannot coexist with any other protocols.
NAS and SAN protocols cannot coexist on the same LIF.
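The "routes to external services only" case maps, for example, to a LIF with -data-protocol none and -firewall-policy mgmt; the "serves data and routes to external services" case uses the same command with the data protocols listed instead of none (a sketch; svm1, node1, e0c, and the addresses are placeholders):
network interface create -vserver svm1 -lif svm1_mgmt -role data -data-protocol none -firewall-policy mgmt -home-node node1 -home-port e0c -address 192.168.1.60 -netmask 255.255.255.0 (management-only LIF: routes to external services but serves no data)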

Difference between 7G and Cluster-mode:
===============================
Junction Path is a new term in Cluster-mode and is used for mounting. Junction paths are where the individual volumes in ONTAP are placed.
The junction path must start with the root (/) and can contain both directories and junctioned volumes. The junction path does not need to contain the name of the volume; junction paths are independent of the volume name.
The junction path is case insensitive; /ENG is the same as /eng.

You can create a data volume without specifying a junction point. The resultant volume is not automatically mounted, and is not available to configure for NAS access. You must mount the volume before you can configure SMB shares or NFS exports for that volume.
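For example (a sketch; svm1, aggr1, and vol_proj are placeholder names):
volume create -vserver svm1 -volume vol_proj -aggregate aggr1 -size 50GB (volume is created without a junction path and is not visible in the namespace)
volume mount -vserver svm1 -volume vol_proj -junction-path /proj (mount the volume so that SMB shares or NFS exports can be configured)
volume unmount -vserver svm1 -volume vol_proj (remove the volume from the namespace again)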

Global namespace
A global namespace is a feature that simplifies storage management in environments that have numerous physical file systems.
A global namespace provides a consolidated view into multiple Network File System (NFS), Common Internet File System (CIFS), and network-attached storage (NAS) systems or file servers that are in different physical locations. This is particularly beneficial in distributed implementations with unstructured data and in environments that are growing quickly, so that data can be accessed without needing to know where it physically resides. Without a global namespace, these multiple file systems would have to be managed separately.

What a Vserver is

A virtual storage server (Vserver) contains data volumes and one or more LIFs through which it
serves data to the clients. Starting with clustered Data ONTAP 8.1.1, a Vserver can either contain
one or more FlexVol volumes, or a single Infinite Volume.

A Vserver securely isolates the shared virtualized data storage and network, and appears as a
single dedicated server to its clients. Each Vserver has a separate administrator authentication
domain and can be managed independently by a Vserver administrator.

In a cluster, Vservers facilitate data access. A cluster must have at least one Vserver to serve data.
Vservers use the storage and network resources of the cluster. However, the volumes and LIFs
are exclusive to the Vserver. Multiple Vservers can coexist in a single cluster without being
bound to any node in a cluster. However, they are bound to the physical cluster on which they
exist.

A cluster can have one or more Vservers with FlexVol volumes and Vservers with Infinite
Volumes.

Vserver with FlexVol volumes


A Vserver with FlexVol volumes in a NAS environment presents a single directory hierarchical
view and has a unique namespace. Namespace enables the NAS clients to access data without
specifying the physical location of the data. Namespace also enables the cluster and Vserver
administrators to manage distributed data storage as a single directory with multiple levels of
hierarchy.

The volumes within each NAS Vserver are related to each other through junctions and are
mounted on junction paths. These junctions present the file system in each volume. The root
volume of a Vserver is a FlexVol volume that resides at the top level of the namespace hierarchy;
additional volumes are mounted to the Vserver's root volume to extend the namespace. As
volumes are created for the Vserver, the root volume of a Vserver contains junction paths.

A Vserver with FlexVol volumes can contain files and LUNs. It provides file-level data access by
using NFS and CIFS protocols for the NAS clients, and block-level data access by using iSCSI,
and Fibre Channel (FC) protocol (FCoE included) for SAN hosts.

Vserver with Infinite Volume


A Vserver with Infinite Volume can contain only one Infinite Volume to serve data. A Vserver
with Infinite Volume includes only one junction path, which has a default value of /NS. The
junction provides a single mount point for the large namespace provided by the Vserver with
Infinite Volume. You cannot add more junctions to a Vserver with Infinite Volume. However, you
can increase the size of the Infinite Volume.

A Vserver with Infinite Volume can contain only files. It provides file-level data access by using
NFS and CIFS (SMB 1.0) protocols. A Vserver with Infinite Volume cannot contain LUNs and
does not provide block-level data access.

Types of Vservers

A cluster consists of three types of Vservers, which help in managing the cluster and its resources
and the data access to the clients and applications.

A cluster contains the following types of Vservers:

Admin Vserver

Node Vserver

Data Vserver

The cluster setup process automatically creates the admin Vserver for the cluster. A node Vserver
is created when the node joins the cluster. The admin Vserver represents the cluster, and node
Vserver represents the individual nodes of the cluster.

The data Vserver represents the data serving Vservers. After the cluster setup, a cluster
administrator must create data Vservers and add volumes to these Vservers to facilitate data
access from the cluster. A cluster must have at least one data Vserver to serve data to its clients.

Why you use Vservers

Vservers provide data access to clients without regard to physical storage or controller, similar to
any storage system. When you use Vservers, they provide benefits such as nondisruptive
operation, scalability, security, and unified storage.

A Vserver has the following benefits:

Nondisruptive operation

Vservers can operate continuously and nondisruptively for as long as they are needed.
Vservers help clusters to operate continuously during software and hardware upgrades,
addition and removal of nodes, and all administrative operations.

Scalability

Vservers meet on-demand data throughput and the other storage requirements.

Security

A Vserver appears as a single independent server, which enables multiple Vservers to


coexist while ensuring no data flows among them.

Unified Storage

Vservers can serve data concurrently through multiple data access protocols. A Vserver
provides file-level data access by using NAS protocols, such as CIFS and NFS, and
block-level data access by using SAN protocols, such as iSCSI and FC (FCoE included).
A Vserver can serve data to SAN and NAS clients independently at the same time.

Note: A Vserver with Infinite Volume can serve data only through NFS and
CIFS (SMB 1.0) protocols.

Delegation of management

A Vserver can have its own user and administration authentication. Vserver
administrators can manage the Vservers that they are authorized to access. However,
Vserver administrators have privileges assigned by the cluster administrators.

Easy Management of large datasets

With Vserver with Infinite Volume, management of large and unstructured data is easier
as the Vserver administrator has to manage one data container instead of many.

Number of Vservers in a cluster

The number of Vservers that you can create in a cluster depends on the number of nodes and
how the LIFs are configured and used in your cluster.

The maximum number of nodes supported for Vservers in a NAS cluster is 24, and
in a SAN cluster is 8. If any node in a cluster uses SAN protocols then the entire
cluster is limited to 8 nodes.

Netapp Cluster mode commands cheat sheet

set -privilege advanced (Enter into privilege mode)


set -privilege diagnostic (Enter into diagnostic mode)
set -privilege admin (Enter into admin mode)
system timeout modify 30 (Sets system timeout to 30 minutes)
system node run -node local sysconfig -a (Run sysconfig on the local node)
The symbol ! means 'other than' in clustered ONTAP, e.g. storage aggregate show -state !online (show all aggregates that are not online)
node run -node -command sysstat -c 10 -x 3 (Running the sysstat performance
tool with cluster mode)

system node image show (Show the running Data Ontap versions and which is the
default boot)
dashboard performance show (Shows a summary of cluster performance including
interconnect traffic)
node run * environment shelf (Shows information about the Shelves Connected
including Model Number)

DIAGNOSTICS USER (CLUSTERED ONTAP)
security login unlock -username diag (Unlock the diag user)
security login password -username diag (Set a password for the diag user)
security login show -username diag (Show the diag user)

SYSTEM CONFIGURATION BACKUPS FOR CLUSTERED ONTAP


system configuration backup create -backup-name node1-backup -node node1
(Create a cluster backup from node1)
system configuration backup create -backup-name node1-backup -node node1
-backup-type node (Create a node backup of node1)
system configuration backup upload -node node1 -backup node1.7z -destination
ftp://username:password@ftp.server.com (Uploads a backup file to ftp)

LOGS
To look at the logs within clustered ontap you must log in as the diag user to a specific node

set -privilege advanced


systemshell -node
username: diag
password:
cd /mroot/etc/mlog
cat command-history.log | grep volume (searches the command-history.log file
for the keyword volume)
exit (exits out of diag mode)

SERVICE PROCESSOR


system node image get -package http://webserver/306-
02765_A0_SP_3.0.1P1_SP_FW.zip -replace-package true (Copies the firmware file
from the webserver into the mroot directory on the node)
system node service-processor image update -node node1 -package 306-
02765_A0_SP_3.0.1P1_SP_FW.zip -update-type differential (Installs the firmware
package to node1)
system node service-processor show (Show the service processor firmware levels
of each node in the cluster)
system node service-processor image update-progress show (Shows the progress
of a firmware update on the Service Processor)

CLUSTER
set -privilege advanced (required to be in advanced mode for the below
commands)
cluster statistics show (shows statistics of the cluster CPU, NFS, CIFS,
FCP, Cluster Interconnect Traffic)
cluster ring show -unitname vldb (check if volume location database is in
quorum)
cluster ring show -unitname mgmt (check if management application is in
quorum)
cluster ring show -unitname vifmgr (check if virtual interface manager is in
quorum)
cluster ring show -unitname bcomd (check if san management daemon is in
quorum)
cluster unjoin (must be run in priv -set admin, disjoins a cluster node. Must
also remove its cluster HA partner)
debug vreport show (must be run in priv -set diag, shows WAFL and VLDB
consistency)
event log show -messagename scsiblade.* (show that cluster is in quorum)

NODES
system node rename -node -newname
system node reboot -node NODENAME -reason ENTER REASON (Reboot node with a
given reason. NOTE: check ha policy)

FLASH CACHE
system node run -node * options flexscale.enable on (Enabling Flash Cache on
each node)
system node run -node * options flexscale.lopri_blocks on (Enabling Flash
Cache on each node)
system node run -node * options flexscale.normal_data_blocks on (Enabling
Flash Cache on each node)
node run NODENAME stats show -p flexscale (flashcache configuration)
node run NODENAME stats show -p flexscale-access (display flash cache
statistics)
FLASH POOL
storage aggregate modify -hybrid-enabled true (Change the AGGR to hybrid)
storage aggregate add-disks -disktype SSD (Add SSD disks to AGGR to begin
creating a flash pool)
priority hybrid-cache set volume1 read-cache=none write-cache=none (Within
node shell and diag mode disable read and write cache on volume1)

FAIL-OVER
storage failover takeover -bynode (Initiate a failover)
storage failover giveback -bynode (Initiate a giveback)
storage failover modify -node -enabled true (Enabling failover on one of the
nodes enables it on the other)
storage failover show (Shows failover status)
storage failover modify -node -auto-giveback false (Disables auto giveback on
this ha node)
storage failover modify -node -auto-giveback true (Enables auto giveback on
this ha node)
aggregate show -node NODENAME -fields ha-policy (show SFO HA Policy for
aggregate)

AGGREGATES
aggr create -aggregate -diskcount -raidtype raid_dp -maxraidsize 18 (Create an
AGGR with X amount of disks, raid_dp and raidgroup size 18)
aggr offline | online (Make the aggr offline or online)
aggr rename -aggregate -newname
aggr relocation start -node node01 -destination node02 -aggregate-list aggr1
(Relocate aggr1 from node01 to node02)
aggr relocation show (Shows the status of an aggregate relocation job)

aggr show -space (Show used and used% for volume foot prints and aggregate
metadata)
aggregate show (show all aggregates size, used% and state)
aggregate add-disks -aggregate -diskcount (Adds a number of disks to the
aggregate)
reallocate measure -vserver vmware -path /vol/datastore1 -once true (Test to
see if the volume datastore1 needs to be reallocated or not)
reallocate start -vserver vmware -path /vol/datastore1 -force true -once true
(Run reallocate on the volume datastore1 within the vmware vserver)

DISKS
storage disk assign -disk 0a.00.1 -owner (Assign a specific disk to a node) OR
storage disk assign -count -owner (Assign unallocated disks to a node)
storage disk show -ownership (Show disk ownership to nodes)
storage disk show -state broken | copy | maintenance | partner | percent |
reconstructing | removed | spare | unfail |zeroing (Show the state of a disk)
storage disk modify -disk NODE1:4c.10.0 -owner NODE1 -force-owner true (Force
the change of ownership of a disk)
storage disk removeowner -disk NODE1:4c.10.0 -force true (Remove ownership of
a drive)
storage disk set-led -disk Node1:4c.10.0 -action blink -time 5 (Blink the led
of disk 4c.10.0 for 5 minutes. Use the blinkoff action to turn it off)

VSERVER
vserver setup (Runs the clustered ontap vserver setup wizard)
vserver create -vserver -rootvolume (Creates a new vserver)
vserver show (Shows all vservers in the system)
vserver show -vserver (Show information on a specific vserver)

VOLUMES
volume create -vserver -volume -aggregate -size 100GB -junction-path
/eng/p7/source (Creates a Volume within a vserver)
volume move -vserver -volume -destination-aggregate -foreground true (Moves a
Volume to a different aggregate with high priority)
volume move -vserver -volume -destination-aggregate -cutover-action wait
(Moves a Volume to a different aggregate with low priority but does not
cutover)
volume move trigger-cutover -vserver -volume (Trigger a cutover of a volume
move in waiting state)
volume move show (shows all volume moves currently active or waiting. NOTE:
You can only do 8 volume moves at one time, more than 8 and they get queued)
system node run node vol size 400g (resize volume_name to 400GB) OR
volume size -volume -new-size 400g (resize volume_name to 400GB)
volume modify -vserver -filesys-size-fixed false -volume (Turn off fixed file
sizing on volumes)

LUNS
lun show -vserver (Shows all luns belonging to this specific vserver)
lun modify -vserver -space-allocation enabled -path (Turns on space allocation
so you can run lun reclaims via VAAI)
lun geometry -vserver path /vol/vol1/lun1 (Displays the lun geometry)

NFS
vserver nfs modify -v4.1-pnfs enabled (Enable pNFS. NOTE: Cannot coexist with
NFSv4)

FCP
storage show adapter (Show Physical FCP adapters)
fcp adapter modify -node NODENAME -adapter 0e -state down (Take port 0e
offline)
node run fcadmin config (Shows the config of the adapters, Initiator or
Target)
node run fcadmin config -t target 0a (Changes port 0a from initiator to target.
You must reboot the node)

CIFS
vserver cifs create -vserver -cifs-server -domain (Enable Cifs)
vserver cifs share create -share-name root -path / (Create a CIFS share called
root)
vserver cifs share show
vserver cifs show

SMB
vserver cifs options modify -vserver -smb2-enabled true (Enable SMB2.0 and
2.1)

SNAPSHOTS
volume snapshot create -vserver vserver1 -volume vol1 -snapshot snapshot1
(Create a snapshot on vserver1, vol1 called snapshot1)
volume snapshot restore -vserver vserver1 -volume vol1 -snapshot snapshot1
(Restore a snapshot on vserver1, vol1 called snapshot1)
volume snapshot show -vserver vserver1 -volume vol1 (Show snapshots on
vserver1 vol1)

DP MIRRORS AND SNAPMIRRORS


volume create -vserver -volume vol10_mirror -aggregate -type DP (Create a
destinaion Snapmirror Volume)
snapmirror create -vserver -source-path sysadmincluster://vserver1/vol10
-destination-path sysadmincluster://vserver1/vol10_mirror -type DP (Create a
snapmirror relationship for sysadmincluster)
snapmirror initialize -source-path sysadmincluster://vserver1/vol10
-destination-path sysadmincluster://vserver1/vol10_mirror -type DP -foreground
true (Initialize the snapmirror example)
snapmirror update -source-path vserver1:vol10 -destination-path
vserver2:vol10_mirror -throttle 1000 (Snapmirror update and throttle to
1000KB/sec)
snapmirror modify -source-path vserver1:vol10 -destination-path
vserver2:vol10_mirror -throttle 2000 (Change the snapmirror throttle to 2000)
snapmirror restore -source-path vserver1:vol10 -destination-path
vserver2:vol10_mirror (Restore a snapmirror from destination to source)
snapmirror show (show snapmirror relationships and status)
NOTE: You can create snapmirror relationships between 2 different clusters by creating a peer
relationship

SNAPVAULT
snapmirror create -source-path vserver1:vol5 -destination-path
vserver2:vol5_archive -type XDP -schedule 5min -policy backup-vspolicy (Create
snapvault relationship with 5 min schedule using backup-vspolicy)
NOTE: Type DP (asynchronous), LS (load-sharing mirror), XDP (backup vault, snapvault), TDP
(transition), RST (transient restore)

NETWORK INTERFACE
network interface show (show network interfaces)
network port show (Shows the status and information on current network ports)
network port modify -node * -port -mtu 9000 (Enable Jumbo Frames on interface
vif_name)
network port modify -node * -port -flowcontrol-admin none (Disables Flow
Control on port data_port_name)
network interface revert * (revert all network interfaces to their home port)

INTERFACE GROUPS
ifgrp create -node -ifgrp -distr-func ip -mode multimode (Create an interface
group called vif_name on node_name)
network port ifgrp add-port -node -ifgrp -port (Add a port to vif_name)
net int failover-groups create -failover-group data__fg -node -port (Create a
failover group Complete on both nodes)
ifgrp show (Shows the status and information on current interface groups)
net int failover-groups show (Show Failover Group Status and information)

ROUTING GROUPS
network interface show-routing-group (show routing groups for all vservers)
network routing-groups show -vserver vserver1 (show routing groups for
vserver1)
network routing-groups route create -vserver vserver1 -routing-group
10.1.1.0/24 -destination 0.0.0.0/0 -gateway 10.1.1.1 (Creates a default route
on vserver1)
ping -lif-owner vserver1 -lif data1 -destination www.google.com (ping
www.google.com via vserver1 using the data1 port)

DNS
services dns show (show DNS)

UNIX
vserver services unix-user show
vserver services unix-user create -vserver vserver1 -user root -id 0 -primary-
gid 0 (Create a unix user called root)
vserver name-mapping create -vserver vserver1 -direction win-unix -position 1
-pattern (.+) -replacement root (Create a name mapping from windows to unix)
vserver name-mapping create -vserver vserver1 -direction unix-win -position 1
-pattern (.+) -replacement sysadmin011 (Create a name mapping from unix to
windows)
vserver name-mapping show (Show name-mappings)

NIS
vserver services nis-domain create -vserver vserver1 -domain vmlab.local
-active true -servers 10.10.10.1 (Create nis-domain called vmlab.local
pointing to 10.10.10.1)
vserver modify -vserver vserver1 -ns-switch nis-file (Name Service Switch
referencing a file)
vserver services nis-domain show

NTP
system services ntp server create -node -server (Adds an NTP server to
node_name)
system services ntp config modify -enabled true (Enable ntp)

system node date modify -timezone (Sets timezone for Area/Location Timezone.
i.e. Australia/Sydney)
node date show (Show date on all nodes)

DATE AND TIME


timezone -timezone Australia/Sydney (Sets the timezone for Sydney. Type ?
after -timezone for a list)
date 201307090830 (Sets date for yyyymmddhhmm)
date -node (Displays the date and time for the node)

CONVERGED NETWORK ADAPTERS (FAS 8000)


ucadmin show -node NODENAME (Show CNA ports on specific node)
ucadmin modify -node NODENAME -adapter 0e -mode cna (Change adapter 0e from FC to
CNA. NOTE: A reboot of the node is required)

PERFORMANCE
statistics show-periodic -object volume -instance volumename -node node1 -vserver
vserver1 -counter total_ops|avg_latency|read_ops|read_latency (Show the
specific counters for a volume)
statistics show-periodic -object nfsv3 -instance vserver1 -counter nfsv3_ops|
nfsv3_read_ops|nfsv3_write_ops|read_avg_latency|write_avg_latency (Shows the
specific nfsv3 counters for a vserver)
sysstat -x 1 (Shows counters for CPU, NFS, CIFS, FCP, WAFL)

Understanding quorum and epsilon


Quorum and epsilon are important measures of cluster health and function that together indicate
how clusters address potential communications and connectivity challenges.
Quorum is a precondition for a fully functioning cluster. When a cluster is in quorum, a simple
majority of nodes are healthy and can communicate with each other. When quorum is lost, the
cluster loses the ability to accomplish normal cluster operations. Only one collection of nodes
can have quorum at any one time because all of the nodes collectively share a single view of the
data. Therefore, if two non-communicating nodes are permitted to modify the data in divergent
ways, it is no longer possible to reconcile the data into a single data view.

Each node in the cluster participates in a voting protocol that elects one node master; each
remaining node is a secondary. The master node is responsible for synchronizing information
across the cluster. When quorum is formed, it is maintained by continual voting; if the master
node goes offline, a new master is elected by the nodes that remain online.

Because there is the possibility of a tie in a cluster that has an even number of nodes, one node
has an extra fractional voting weight called epsilon. When the connectivity between two equal
portions of a large cluster fails, the group of nodes containing epsilon maintains quorum,
assuming that all of the nodes are healthy.
For example, if a single link is established between 12 nodes in one row and 12 nodes in another
row to compose a 24-node cluster and the link fails, then the group of nodes that holds epsilon
would maintain quorum and continue to serve data while the other 12 nodes would stop serving
data. However, if the node holding epsilon was unhealthy or offline, then quorum would not be
formed, and all of the nodes would stop serving data.

Epsilon is automatically assigned to the first node when the cluster is created. As long as the
cluster is in quorum, if the node that holds epsilon becomes unhealthy or is taken over by its
high-availability partner, epsilon is automatically reassigned to a healthy node.
In general, assuming reliable connectivity among the nodes of the cluster, a larger cluster is more
stable than a smaller cluster. The quorum requirement of a simple majority of half the nodes plus
epsilon is easier to maintain in a cluster of 24 nodes than in a cluster of two nodes.

A two-node cluster presents some unique challenges for maintaining quorum. In a two-node
cluster, neither node holds epsilon; instead, both nodes are continuously polled to ensure that if
one node fails, the other has full read-write access to data, as well as access to logical interfaces
and management functions.
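Quorum and epsilon can be inspected from the CLI, for example (a sketch; advanced privilege is assumed for the epsilon output, and node1 is a placeholder name):
set -privilege advanced
cluster show -epsilon * (show which node currently holds epsilon)
cluster modify -node node1 -epsilon true (manually move epsilon to node1)
event log show -messagename scsiblade.* (confirm that the cluster is in quorum)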

What a cluster replication ring is


A replication ring is a set of identical processes running on all nodes in the cluster.