
PEER MOTION: WHAT IT IS

Non-disruptive migration of data from one InServ to another


1. This means no loss of access to the data during migration; there is a performance impact during the process
No modifications to the legacy InServ (can be running 2.2.4 or 2.3.1)
1. Remember that F-Class and T-Class arrays can be upgraded to 3.1.1 and thus be a target as well as a source
Migrate volumes to fat or thin volumes
1. Regardless of whether the original volume is fat or thin
Separately licensed feature

PEER MOTION: WHAT IT IS NOT IN PHASE 1

A way to migrate from 3rd party arrays


A solution for clustered hosts
Able to maintain snapshot trees
1. Snapshots can be migrated, but become base volumes
True peers
1. No modification of the legacy array
2. The legacy array only needs to be configured for the duration of the migration

PEER MOTION: ADMITVV
Admits a volume from a remote array and makes it ready for export
back to the host
Creates a VV of type peer with no_cache policy and the same WWN
as the volume on the remote array
Volume is entirely backed by the remote array; this step does not
involve copying any data to the new array
admitvv [-domain <domain>] vvname:wwn
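
For illustration, a minimal sketch of admitting a volume; the domain, volume name, and WWN below are hypothetical:

# Create a local peer VV named "oradata_vv" in domain "OracleDom", backed by
# the remote volume whose WWN is 50002AC000760123 (no data is copied yet)
admitvv -domain OracleDom oradata_vv:50002AC000760123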

PEER MOTION: IMPORTVV
Imports an admitted volume
Can import to fat or thin VV
Places a SCSI-3 reservation on the remote VV
Switches the entire VV in one pass (the remote array retains a full copy of the
data until the process completes)
Can take snapshots at import switchover
New task type (import_vv)
importvv [-snp_cpg <cpg>] [-snap <snapname>] [-tpvv] <cpg> <vv>
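
For illustration, a minimal sketch of importing the admitted volume as a thin VV; the CPG, snapshot, and volume names are hypothetical:

# Import the admitted peer volume "oradata_vv" into CPG "FC_r5" as a TPVV,
# taking a snapshot named "oradata_premigrate" at the import switchover
importvv -snp_cpg FC_r5 -snap oradata_premigrate -tpvv FC_r5 oradata_vv
# The import runs as a task of type import_vv; check its progress with showtask
showtask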

Converged Migration: HP 3PAR Peer Motion

Traditional Block Migration approaches are complex, time-consuming, and risky: they rely on extra tools, downtime, additional software, SLA planning, migration appliances, and complex post-process thinning.

HP 3PAR Peer Motion is the 1st non-disruptive DIY migration for the enterprise SAN: simple, fool-proof, online, non-disruptive, any-to-any 3PAR, with thinning built in.

With Peer Motion, customers can:

Load balance at will
Perform tech refresh seamlessly
Cost-optimize Asset Lifecycle Management
Lower tech refresh CAPEX (thin landing)
Migration Phases
(Diagram: Source and Destination Systems connected by a Primary Path, a Secondary Path, and a Data Migration Path)

Storage System-to-System Configuration
Connect Source & Destination Systems via SAN
Configure ports on each system
Import System configuration
Identify Destination System as a host on Source system
Make data volumes on Source System visible to Destination System

Destination System-to-Host Configuration
Connect Destination System to host
Export peer volume(s) to host
Verify I/O active on all volume paths to host
Un-zone paths from Host to Source System (see the command sketch below)
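
A hedged CLI sketch of this configuration phase; the host names, WWNs, volume name, and LUN are hypothetical, and exact options can vary by InForm OS version:

# On the Source System: define the Destination System as a host using the
# destination's peer port WWNs, then export the data volume to it
createhost DestSystem 2FF70002AC000123 2FF70002AC000124
createvlun oradata_vv 0 DestSystem

# On the Destination System: admit the volume in peer mode, export it to the
# real host, and confirm the host has active paths before un-zoning the old ones
admitvv oradata_vv:50002AC000760123
createvlun oradata_vv 0 dbserver01
showvlun -host dbserver01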

Migration Phases (continued)
(Diagram: Source and Destination Systems connected by a Primary Path, a Secondary Path, and a Data Migration Path)

Migrate Data
User selects volume QoS parameters (volume type, drive type, RAID, HA)
Data migration begins
Data is replicated from Source System to Destination System
Zeros are detected and removed before landing on Destination System
Host I/O continues via Destination System
Source and Destination volumes remain in sync during migration
Data migration completes

Post-migration Clean Up
Remove volume export from Source System
Remove identification of Destination System from Source System (see the command sketch below)
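
A hedged sketch of the post-migration clean-up on the Source System; the volume, LUN, and host names are hypothetical:

# Remove the peer export of the migrated volume from the Source System
removevlun oradata_vv 0 DestSystem
# Remove the Destination System's host definition from the Source System
removehost DestSystem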

DIY Migration with Peer Motion Manager
Connect Storage Systems & Hosts

Import System Configuration
Users, Hosts, Virtual Domains, NTP/Syslog information (Automatic)

Select Host for Migration
All volumes from Source System exported to Host are admitted in Peer mode and exported from Destination System to Host (Automatic)

Verify I/O on New Paths, then De-activate Old Paths (see the host-side check below)

Import Volumes
Volume data is replicated from Source-to-Destination System; Host I/O remains active

Post Migration Cleanup (Manual)
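
As a hedged host-side check on a Linux host using device-mapper multipath (the device WWID is hypothetical; Windows and Solaris have equivalent multipath tools):

# Confirm that active, healthy paths to the volume now run through the
# Destination System before de-activating the old paths
multipath -ll 350002ac000760123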
BEYOND VIRTUALIZATION: STORAGE FEDERATION

Virtualization
The delivery of consolidated or distributed volume management through appliances that hierarchically control a set of heterogeneous storage arrays
Pros: broader, heterogeneous array support
Cons: more expensive (dual controller layer); additional failure domains; lowest-common-denominator function; likely additional administration

Federation
The delivery of distributed volume management across a set of self-governing, homogeneous, peer storage arrays
Pros: less expensive; minimized failure domains; simpler administration
Cons: no heterogeneous array support

Peer Motion Competitive Differentiator
Key attributes, Peer Motion vs. EMC FLM, HDS USP Data Migration, and IBM XIV Data Migration:

Online data migration
Non-disruptive data migration (HDS USP and IBM XIV: No, disruptive to add)
Migration between systems on different SW versions
Migration between midrange & enterprise systems (EMC FLM: No, DMX-to-VMAX only)
Fat-to-Thin conversion, landing thin (EMC FLM, HDS USP, IBM XIV: ?)
No new layers, peer-based (HDS USP: No, virtualization controller; IBM XIV: No, temporary virtualization layer)
Data migration automation via Peer Motion Manager (EMC FLM: No, manual; HDS USP and IBM XIV: ?)
PEER MOTION: SUPPORTED ENVIRONMENT
Peer Motion is ideal for non-disruptive data migration in the following environments:
Hosts: Windows, Solaris, or Linux host environments
Source Storage: HP 3PAR Systems running InForm OS 2.2.4, 2.3.1, or 3.1.1
Source Storage Volume: Thin or Fat Volume
No existing snapshots
Not part of a replication group

