
SnapMirror overview

What SnapMirror is

The Data ONTAP SnapMirror feature enables an administrator to mirror
snapshot images from a source volume or qtree to a partner destination volume
or qtree, thus replicating source object data on destination objects at regular
intervals. You can access the information on the destination volume or qtree to

• Provide users quick access to mirrored data in the event of a disaster that
makes the source volume or qtree unavailable
• Update the source to recover from a disaster, data corruption (qtrees only),
or user error
• Archive the data to tape
• Balance resource loads
• Back up or distribute the data to remote sites
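
As an illustrative sketch (the filer and volume names are invented, and the -S
source option and the vol restrict preparation step are assumptions not shown in
this excerpt), a volume mirror is typically established with an initial baseline
transfer and then kept current with incremental updates:

test1> vol restrict arc1
test1> snapmirror initialize -S fridge:home test1:arc1
(monitor progress with "snapmirror status")
test1> snapmirror update test1:arc1

The initialize command performs the first full transfer; update then sends only
the changes made since the last common snapshot.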

The components of SnapMirror

The basic SnapMirror deployment consists of the following components.

Source volumes or qtrees

SnapMirror source volumes and qtrees are writable data objects whose data is to
be replicated. They are the objects that filer clients normally see, access,
and write to.

Destination volumes or qtrees

The SnapMirror destination volumes and qtrees are read-only objects, usually on
a separate filer, to which the source volumes and qtrees are replicated. The
destination volumes and qtrees are normally accessed by users only when a
disaster takes down the source volumes or qtrees and the administrator uses
SnapMirror commands to make the replicated data at the destination accessible
and writable.
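
As a hedged sketch of that recovery flow (volume names are invented), quiesce
lets any transfer in progress finish cleanly, break makes the destination
writable, and resync can later re-establish the mirror once the source is
available again:

test1> snapmirror quiesce test1:arc1
test1> snapmirror break test1:arc1
(arc1 is now writable and can be exported to clients)
test1> snapmirror resync test1:arc1

Note that resync, run on the destination, discards any changes made to the
destination after the last snapshot it has in common with the source.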
SnapMirror commands
NAME
na_snapmirror - volume and qtree mirroring

SYNOPSIS
snapmirror { on | off }

snapmirror status [ options ] [ volume | qtree ... ]

snapmirror initialize [ options ] destination

snapmirror update [ options ] destination

snapmirror quiesce destination

snapmirror resume destination

snapmirror break destination

snapmirror resync [ options ] destination

snapmirror destinations [ option ] [ source ]

snapmirror release source destination

snapmirror { store | retrieve } volume tapedevices

snapmirror use destination tapedevices

snapmirror abort [ options ] destination ...


snapmirror migrate [ options ] source destination
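
As a small illustrative sketch of relationship housekeeping (the names are
invented and output is omitted), destinations lists where a source volume is
being replicated, and release tells the source to stop retaining the snapshots
needed by a destination that is no longer wanted:

test1> snapmirror destinations home
test1> snapmirror release home test2:arc9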

EXAMPLES
Here are a few examples of using the snapmirror command:

The following example turns the scheduler on and off:

test1> snapmirror on
test1> snapmirror status
Snapmirror is on.
test1> snapmirror off
test1> snapmirror status
Snapmirror is off.
test1>
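The scheduler turned on above conventionally reads its replication schedule from
the /etc/snapmirror.conf file on the destination filer; the entry below is
illustrative rather than taken from this excerpt. Each line names a source, a
destination, an arguments field (- for defaults), and a four-field schedule
(minute hour day-of-month day-of-week). This entry would update test1:arc1 from
fridge:home at the start of every hour:

fridge:home test1:arc1 - 0 * * *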
The following example shows snapmirror status with transfers running. Two
destinations (both from fridge) are idle; one of these has a restart checkpoint
and could be restarted if the setup of the two volumes has not changed since the
checkpoint was made. The transfer from vol1 to arc2 has just started and is in
the initial stage of transferring. The transfer from test1 to icebox is
partially complete; here, we can see the number of megabytes transferred.
test1> snapmirror status
Snapmirror is on.
Source        Destination   State          Lag       Status
fridge:home   test1:arc1    Snapmirrored   22:09:58  Idle
test1:vol1    test1:arc2    Snapmirrored   01:02:53  Transferring
test1:vol2    icebox:saved  Uninitialized  -         Transferring (128MB done)
fridge:users  test1:arc3    Snapmirrored   10:14:36  Idle with restart checkpoint (12MB done)
test1>
The following example presents detailed status for one of the above snapmirror
relationships, specified as an argument to the command. It displays extra
information, such as the base snapshot, transfer type, error message, and last
transfer details.
test1> snapmirror status -l arc1
Snapmirror is on.

Source: fridge:home
Destination: test1:arc1
Type: Volume
Status: Idle
Progress: -
State: Snapmirrored
Lag: 22:09:58
Mirror Timestamp: Wed Aug 8 16:53:04 GMT 2001
Base Snapshot: test1(0001234567)_arc1.1
Current Transfer Type: -
Current Transfer Error: -
Contents: Replica
Last Transfer Type: Initialize
Last Transfer Size: 1120000 KB
Last Transfer Duration: 00:03:47
Last Transfer From: fridge:home
The following example shows how to use the status command to list all the
volumes and qtrees on this filer that are quiesced or quiescing.
filer> snapmirror status -q
Snapmirror is on.
vol1 has quiesced/quiescing qtrees:
/vol/vol1/qt0 is quiesced
/vol/vol1/qt1 is quiescing
vol2 is quiescing
The following example starts writing an image of vol1 on test1 to the tape on tape device
rst0a and continues with the tape on rst1a. When the second tape is used up, the example
shows how to resume the store using a new tape on rst0a.
test1> snapmirror store vol1 rst0a,rst1a
snapmirror: Reference Snapshot:
snapmirror_tape_5.17.100_21:47:28
test1>
SNAPMIRROR: store to test1:rst0a,rst1a has run out of tape.
test1> snapmirror use test1:rst0a,rst1a rst0a
test1>
Wed May 17 23:36:31 GMT [worker_thread:notice]: snapmirror:
Store from volume 'vol1' to tape was successful (11 MB in 1:03 minutes,
3 tapes written).
The following example retrieves the header of the tape on tape device rst0a. It then
retrieves the image of vol1 from the tape on tape device rst0a.
test1> snapmirror retrieve -h rst0a
Tape Number: 1
WAFL Version: 12
BareMetal Version: 1
Source Filer: test1
Source Volume: vol1
Source Volume Capacity: 16MB
Source Volume Used Size: 11MB
Source Snapshot:
snapmirror_tape_5.17.100_21:47:28
test1>
test1> snapmirror retrieve vol8 rst0a
SNAPMIRROR: retrieve from tape to test1:vol8 has run out of
tape.
test1> snapmirror use test1:vol8 rst0a
SNAPMIRROR: retrieve from tape to test1:vol8 has run out of
tape.
test1> snapmirror use test1:vol8 rst0a
test1> snapmirror status
Snapmirror is on.
Source             Destination  State    Lag  Status
test1:rst1a,rst0a  test1:vol8   Unknown  -    Transferring (17MB done)
test1>
Wed May 17 23:54:29 GMT [worker_thread:notice]: snapmirror:
Retrieve from tape to volume 'vol8' was successful (11 MB in 1:30
minutes).
The following example examines the status of all transfers, then aborts the transfers to
volm1 and volm2, and checks the status again. To clear the restart checkpoint,
snapmirror abort is invoked again.
test1> snapmirror status
Snapmirror is on.
Source       Destination  State          Lag       Status
fridge:home  test1:volm1  Uninitialized  -         Transferring (10GB done)
fridge:mail  test1:volm2  Snapmirrored   01:00:31  Transferring (4423MB done)
test1> snapmirror abort test1:volm1 volm2
test1> snapmirror status
Snapmirror is on.
Source       Destination  State         Lag       Status
fridge:home  test1:volm1  Snapmirrored  00:01:25  Idle
fridge:mail  test1:volm2  Snapmirrored  01:03:11  Idle with restart checkpoint (7000MB done)
test1> snapmirror abort test1:volm2
test1> snapmirror status
Snapmirror is on.
Source Destination State Lag Status
fridge:home test1:volm1 Snapmirrored 00:02:35 Idle
fridge:mail test1:volm2 Snapmirrored 01:04:21 Idle
The following example examines the status of all transfers, then aborts the transfers to
volm1 and volm2 with the -h option and checks the status again. No restart checkpoint is
saved.
test1> snapmirror status
Snapmirror is on.
Source       Destination  State          Lag       Status
fridge:home  test1:volm1  Uninitialized  -         Transferring (10GB done)
fridge:mail  test1:volm2  Snapmirrored   01:00:31  Transferring (4423MB done)
test1> snapmirror abort -h test1:volm1 test1:volm2
test1> snapmirror status
Snapmirror is on.
Source Destination State Lag Status
fridge:home test1:volm1 Snapmirrored 00:02:35 Idle
fridge:mail test1:volm2 Snapmirrored 01:04:21 Idle
Here is an example of the use of the snapmirror migrate command:
test1> snapmirror migrate home mirror
negotiating with destination....
This SnapMirror migration will take local source volume home and complete a final
transfer to destination test1:mirror using the interface named test1. After that, open NFS
filehandles on the source will migrate to the destination and any NFS filehandles open on
the destination will be made stale. Clients will only see the migrated NFS filehandles if
the destination is reachable at the same IP address as the source. The migrate process
will not take care of renaming or exporting the destination volume.

As a result of this process, the source volume home will be taken offline, and
NFS service to this filer will be stopped during the transfer. CIFS service on the
source volume will be terminated and CIFS will have to be set up on the
destination.
Are you sure you want to do this? yes
nfs turned off on source filer
performing final transfer from test1:home to mirror....
(monitor progress with "snapmirror status")
transfer from test1:home to mirror successful
starting nfs filehandle migration from home to mirror
source volume home brought offline
source nfs filehandles invalidated
destination test1:mirror confirms migration
migration complete
test1> vol status
Volume State Status Options
root online normal root, raidsize=14
mirror online normal
home offline normal
test1> vol rename home temp
home renamed to temp
you may need to update /etc/exports
test1> vol rename mirror home
mirror renamed to home
you may need to update /etc/exports
test1> exportfs -a

NOTES
If a source volume is larger than the replica destination, the transfer is disallowed.
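
Because of this restriction, it can be worth comparing the sizes of the two
volumes before starting a transfer. A minimal hedged check, assuming only the
standard df command (output omitted; note that a restricted or offline volume
may not report usage):

fridge> df /vol/home
test1> df /vol/arc1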

Notes on the snapmirror migrate command:

The migrate command is only part of the migration process. It is intended for
cases where an administrator wants to move the data of one volume to another,
possibly to move to a new set of disks, or to a larger volume without adding
disks.

We intend that migrate be run in as controlled an environment as possible. It is
best if there are no dumps or SnapMirror transfers going on during the migration.

Clients may see stale filehandles or unresponsive NFS service while migrate
is running. This is expected behavior. Once the destination volume is made
writable, clients will see the data as if nothing had happened.

migrate will not change exports or IP addresses; the new destination volume
must be reachable in the same way as the source volume once was.

CIFS service will need to be restarted on the migrate destination.
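
A minimal sketch of that last step, assuming CIFS has already been configured on
the destination filer with cifs setup before the service is restarted:

test1> cifs restart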
