Solaris -- pkg
AIX -- installp
********************************************************************************
The CLARiiON basic architecture allows for flexible host attachment through optical Fibre Channel or copper iSCSI ports on the array.
* The connection to all back-end disks is through Fibre Channel loops, implemented using copper cables.
FC5500 (1997) was the first full Fibre Channel storage system.
CX200, 400, 600 -- 2002
CX300, 500, 700 -- 2003
CX300i, 500i -- 2005
CX3-20, 40, 80 -- 2006
CX3-10c, 20c, 20f, 40c, 40f -- 2007
CX4-120, 240, 480, 960 -- 2008 (4th generation of CX)
Data Integrity features:
- Mirrored write cache -- ensures both SPs contain up-to-date, identical write data.
- De-stage write cache to disk upon power failure
- Sniffer verify
- Background verify -- per RAID group
Vault Drives:
Drives 0-4 in the first enclosure on the CX series
Drives 0-9 in the first enclosure on the FC series
These drives hold the write cache contents in the event of a failure.
Persistent Cache
Write cache data is maintained under these scenarios:
Non-disruptive upgrades (NDUs)
Single Storage Processor (SP) Restart
Single Storage Processor (SP) Removed
SP or I/O module replacement or repair
Single SP hard fault or transient h/w failure
Power supply failure
Burst Smoothing -- absorbs bursts of writes into memory, avoiding the disks becoming a bottleneck.
Locality -- a feature which merges several writes to the same disk area (stripe) into a single operation.
* The DAE3P-OS, located at Bus 0, Enclosure 0, contains the system disks in the first five slots.
These disks store the Storage Processor OS and configuration information.
The DAE3P-OS can also contain up to 10 additional data disks.
ALUA -- Asymmetric Logical Unit Access is a SCSI communication standard.
It allows a host to see a LUN over paths through both SPs, even though the LUN is still owned by only one of the SPs.
Although this allows a LUN to be seen over both SPs, the data will normally be sent to the LUN over an "optimal" path that goes through the owning SP.
It is a request forwarding implementation that honors the storage system's LUN ownership feature. However, when necessary or appropriate, it allows I/O to route through either SP. When the data is transmitted to the LUN over a path through the non-owning SP, the data is redirected to the owning SP over the CMI path to the disk holding the LUN.
BEST PRACTICES for ALUA Failover/Performance
* Balance LUN ownership b/w the 2 SPs.
* Ensure that hosts are still using optimal paths to their LUNs.
* Ensure that all LUNs are returned to the default owner (SP) after Non-Disruptive Upgrades (NDUs).
* In case of a failure or performance issue, ensure I/O is routed through the optimal path.
* ALUA with PowerPath is the optimal solution (path state can be checked from the host, as sketched below).
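For example, PowerPath can confirm from the host that optimal paths are in use (a minimal sketch; output & device naming vary by platform):
# powermt display dev=all    # lists every path per device; paths through the owning SP should show as active
# powermt restore            # retests failed paths after a repair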
There are 2 power supplies per CPU module, so a total of 4 power supplies in each system.
Both power supplies for an SP must be removed before the CPU module can be removed.
The base CX4-120 model contains a single SPS unit, mounted as SPS-A.
The connection that would normally attach to SPS-B is connected to the PDU (Power Distribution Unit).
If the optional 2nd SPS is ordered, the power connections are made the same as on the other CX4-240/480 models.
In CX4-240 & 480 models:
SPS-A --- PS-A0 & PS-B0
SPS-B--- PS-A1 & PS-B1
The SPS units should only be used to power the SPE & first DAE.
The management LAN IP address is provided by the customer and set during array initialization.
The Service LAN has a fixed IP address and subnet mask.
GbE Service LAN -- SPA = 128.221.1.250
                   SPB = 128.221.1.251
The NMI (Non-Maskable Interrupt) button forces a dump of the SP's memory and should only be used under direction of Engineering or Tech Support Level 2.
New SATA Flash drives use the SATA II protocol natively within the flash drive & use a paddle card to present the FC interface to the storage system.
They can be used interchangeably with FC Flash drives:
-- SATA Flash drives may be combined with FC Flash drives & conventional FC HDDs in the same enclosure.
-- SATA Flash drives may be combined with FC Flash drives in the same RAID group.
-- SATA Flash drives can spare for FC Flash drives & vice versa.
-- As is the case with any type of flash drive, HDDs cannot spare for Flash drives.
Drive Spin Down:
This is the CLARiiON's 2nd generation spin-down product, the 1st being available only for the CLARiiON Disk Library.
Using this feature, the disk drives transition into a low-power state when idle by spinning down the spindle motor (0 RPM), creating a power saving of 55%-60%.
Unisphere/CLI shd be used to configure this. The feature can be enabled either at the Array or RAID Group level.
* This feature is only available on the CX4 platform.
* Vault drives (drives in slots 0-4) are not eligible for spin down, & spin down is not available for any layered features (MetaLUN, thin LUN, WIL, RLP, CLP etc.)
* Only SATA drives are qualified to participate in power savings.
The cables conform to industry std Fibre Channel passive SFP connectors.
Copper FC cables utilize industry std HSSDC2 connectors on the DAE side.
* SFP-to-HSSDC2 cables are only required from the SP to the first DAE.
* HSSDC2-to-HSSDC2 cables are required for DAE-to-DAE connections.
* 2/4 Gb/s FC copper cable support.
DAE3Ps:
- Supports up to 15 low-profile 2/4 Gb/s FC disk drives
- Two 4 Gb/s LCCs
- Two power supply/cooling modules
- Uses the same cables as the DAE2P enclosure
* 8 meters max b/w DAE3Ps
* 5 meters max b/w SPEs & DAEs
DAE3P Rear View:
The SPA loop components are mounted in the bottom of the enclosure and the SPB loop components are mounted in the top of the enclosure.
* Each power supply in the DAE3P enclosure is capable of powering the entire enclosure in the event of a power supply failure.
* The 2 LCC cards connect to all the disks in the enclosure & operate independently.
** The Loop ID & Enclosure ID on both LCC cards within an enclosure must always match.
DAE3P Rules:
* The first 5 drives in the OS enclosure (BE0, Enc 0) must all be of equal size/speed.
* If possible, keep all drives in a chassis the same size & speed. If not, try to install drives of the same type in groups of five.
* Do not mix 2 Gb/s drives with 4 Gb/s drives in a DAE3P enclosure.
* ATA drives are not allowed in an FC enclosure chassis.
* Empty drive slots must be filled with drive fillers.
Terminology:
Production host -- server where customer applications are executed.
Backup host -- host where backup processing occurs.
Admsnap utility -- an executable program which runs interactively or with a script to manage clones/snapshots.
Source LUN -- production LUN
Activate -- maps a snapshot to an available snapshot session
Snapshot -- a point-in-time copy of a source LUN
RLP -- Reserved LUN Pool: a private area used to contain CoFW data.
It holds all the original data from the source LUN when the host writes to a chunk for the first time.
Chunk -- an aggregate of multiple disk blocks that SnapView uses to perform CoFW operations. Size is set to 64 KB.
CoFW -- when a chunk is changed on the source LUN for the first time, the data is copied to a reserved area.
Session --
To start a tracking mechanism & create a virtual copy that has the potential to be seen by a host, we need to start a session.
A session is associated with one or more snapshots, each of which is associated with a unique source LUN. Once a session has been started, data is moved to the RLP as required by the CoFW mechanism.
We need to activate the snapshot to make it appear online to the host.
Fracture -- the process of breaking off a clone from its source. Once a clone is fractured, any server I/O requests made to the source LUN after the fracture are not copied to the clone unless you manually perform a synchronisation.
Synchronising a fractured clone unfractures the clone & updates the contents of the clone from its source LUN.
Clone group -- contains the source LUN & all of its clones. When you create a clone group, you specify a LUN to be cloned; this LUN is referred to as the source LUN.
Once you create a clone group, SnapView assigns a unique ID to the group.
Clone Private LUN -- records info about data chunks modified in the source LUN & clone LUN after you have fractured the clone.
A modified data chunk is a chunk of data that a server changes by writing to the clone/source LUN.
Reserved LUN sizing:
Average source LUN size = total size of source LUNs / number of source LUNs
Reserved LUN size = 10% of average source LUN size (CoFW factor)
Create twice as many Reserved LUNs as source LUNs (overflow LUN factor)
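A quick worked example of these rules: with 4 source LUNs of 100 GB each, average source LUN size = 400 GB / 4 = 100 GB; each Reserved LUN = 10% of 100 GB = 10 GB; create 2 x 4 = 8 Reserved LUNs, for 80 GB total in the RLP.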
MirrorView:
Terminology:
Mirror synchronisation -- mechanism to copy data from the primary LUN to a secondary LUN.
The mechanism may use the fracture log/write intent log to avoid a full data copy.
Mirror Fracture:
- Condition when a secondary is unreachable by the primary
- Can be invoked by admin command
Mirror Availability States:
- Inactive -- admin control to stop mirror processing
- Active -- I/O allowed (normal state)
- Attention -- admin action req
Mirror Promote:
- Changes an image's role
Mirror Data States:
- Out of sync -- full sync needed
- In sync -- primary LUN & secondary LUN contain identical data
- Rolling back -- primary LUN is returned to its state at a previously defined point in time
- Consistent -- write intent log, or fracture log, may be needed
- Synchronizing -- mirror sync operation in progress
The Fracture Log is a bitmap held in the memory of the storage processor that owns the primary image.
The log indicates which physical areas of the primary have been updated since communication was interrupted with the secondary.
*** Write Intent Log ---
The record of recent changes to the primary image is stored in persistent memory on a private LUN reserved for the mirroring s/w. If the primary storage system fails, the optional write intent log can be used to quickly synchronise the secondary images when the primary storage system becomes available.
This eliminates the need for a full sync of secondary images.
The write intent log keeps track of writes that have not yet been made to the remote image for the mirror. It allows for fast recovery.
Scalability:
- up to 4,096 LUNs
- up to 480 drives
Data Cabling:
Bus 0 -- I/O Module 0, Port 0
Bus 1 -- I/O Module 0, Port 1
Bus 2 -- I/O Module 1, Port 0
Bus 3 -- I/O Module 1, Port 1
CX4-960 Overview:
UltraScale Architecture:
- 4 quad-core 2.33 GHz Clovertown CPU modules
- 32 GB system memory
Connectivity:
- 4096 HA hosts
- up to 12 I/O modules
- 16 FE 1 Gb/s iSCSI host ports
- 24 FE 4 Gb/s FC host ports
- 32 BE 4 Gb/s FC disk ports
- 8 FE 10 Gb/s iSCSI host ports
Scalability:
- up to 4096 LUNs
- up to 960 drives
The CX4-960 uses processor throttling, a mechanism that enables dynamic switching of the CPU core voltage & CPU clock frequency.
This slows down the processor/clock speed, reducing heat generation.
This feature is only available on the CX4-960 platform.
Rule for hot spares:
There shd be one hot spare drive per 30 drives.
MirrorView Thin LUNs:
With R29+, primary & secondary images can be created on thin LUNs.
All combinations of thin & R29+ traditional LUNs are allowed.
Thin LUNs & pre-R29 LUNs are NOT allowed to co-exist in any relationship.
Once the R29+ s/w package is committed, all pre-R29 traditional LUNs become R29+ traditional LUNs.
MirrorView Thin LUN Checks:
-- When adding a secondary thin LUN or synchronising existing mirrors, ensure that the thin pool has enough capacity for the sync to complete, else a MEDIA FAILURE administrative fracture will occur.
However, in the case of MirrorView/S, it checks the secondary image's size before starting the sync.
Business Uses of SAN Copy Replication S/w:
1) Data Migration
2) Data Mobility
3) Content Distribution
4) Disaster Recovery
SAN Copy s/w runs on a SAN Copy storage system. SAN Copy copies data at a block level.
It copies data directly from a logical unit on one storage system to destination logical units on another, without using host resources.
SAN Copy Terminology:
LUN -- CLARiiON logical unit
Source LUN -- the LUN to be replicated
Destination LUN -- the LUN where the data will be replicated
SAN Copy Session -- persistent definition consisting of a source and destination LUNs
SAN Copy Storage System -- system on which SAN Copy is licensed
SAN Copy Port -- CLARiiON SP port used by SAN Copy
Quiesce -- halt all I/O on a LUN
Block Level Copy -- reads & writes at the block level (whole LUN)
PUSH -- the SAN Copy system reads data from one of its LUNs & writes the data to destination LUNs
PULL -- the SAN Copy system reads from a source LUN & writes the data to one of its LUNs
** Before a SAN Copy session can be created, the enabler must be installed on the CLARiiON.
Physical connections must be established b/w storage systems.
Each SAN Copy port participating in a session must be zoned to one or more ports of each SP that participates in the session.
Logical connections must be established b/w storage systems -- SAN Copy initiators must be added to Storage Groups.
SAN Copy uses SnapView to create a snapshot of the source LUN, & actually reads from the snapshot during the update, so there is a consistent PIT view of the data being transferred.
A full SAN Copy session copies the entire contents to the destination LUN. In order to create a consistent copy, writes must be quiesced on the source LUN for the duration of the session.
Incremental SAN Copy (ISC) allows the transfer of changed data only, from source to destination.
SAN Copy supports thin LUNs with FLARE R29 and greater; both source and destination LUNs can be thin LUNs.
***** Exception --->>> With pull copies, when the source is on the remote array, the copy cannot be provisioned as thin. It can only be provisioned as a traditional copy.
SAN Copy/E copies data from CX300 & AX series storage systems to CX series storage systems running SAN Copy.
********************************************************************************
CONNECTRIX FOUNDATION
********************************************************************************
* For SAN arrays, the LUN is the fundamental unit of block storage that can be provisioned.
The host's disk driver treats the array LUN identically to a direct-attached disk spindle, presenting it to the OS as a raw device/character device.
Whereas in NAS, the NAS appliance presents storage in the form of a file system that the host can mount & use via network protocols (NFS/CIFS).
** FABRIC -- a logically defined space in which Fibre Channel nodes can communicate with each other.
In case of DAS: due to static configuration, the bus needs to be quiesced for every device reconfiguration; every connected host loses access to all storage on the bus during this process.
In parallel SCSI: devices on the bus must be set to a unique ID in the range of 0 to 15. Addition of new devices needs careful planning.
FC-SW switched fabric ==> uses a 24-bit address to route traffic, & can accommodate as many as 15 million devices in a single fabric.
An FC HBA is a standard PCI or serial bus peripheral card on the host computer, just like a SCSI adapter.
Fibre Channel is a serial data transfer interface that operates over copper wire or optical fiber at data rates up to 8 Gb/s, & up to 10 Gb/s when used as an ISL on supported switches.
SCSI commands are mapped to Fibre Channel constructs, then encapsulated & transported within Fibre Channel frames, which helps in high-speed transfer of multiple protocols over the same physical interface.
FC-0 -- Physical interface: optical & electrical interfaces, cables, connectors etc.
FC-1 -- Encode/decode, link control protocols, ordered sets
FC-2 -- Exchange & sequence management, frame structure, flow control
FC-3 -- Common services
FC-4 -- SCSI-3 FCP, FC link encapsulation, single-byte command code sets etc.
Upper Layer Protocol -- SCSI-3, IP, ESCON/FICON etc.
FC Frames ==> contain header info, data, CRC & frame delineation markers.
Max data carried in a frame is 2112 bytes, with a total frame size of 2148 bytes.
The header contains the source & destination addresses.
The Type field's interpretation depends on whether the frame is a link control frame or an FC data frame.
FC addresses are 24 bits in length; unlike MACs these are not burnt in but are assigned when a node enters the loop / is connected to a switch.
3 bytes --- 24 bits (Domain -- Area (port) -- AL_PA)
Domain-Area ==> FC-SW
AL_PA ==> FC-AL
The AL_PA is generally 00 unless the device is in an Arbitrated Loop.
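Example: a switched-fabric address of 0x010400 breaks down as Domain = 0x01 (the switch), Area = 0x04 (the switch port), AL_PA = 0x00 (not in a loop).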
The physical address is switch specific & dynamically generated by the fabric login.
A WWN is a 64-bit address used in FC networks to uniquely identify each element in the network.
It is burnt onto the device by the vendor.
Fan-Out Ratio: maximum number of initiators that can access a single storage port through a SAN.
Fan-In Ratio: max number of storage ports that can be accessed by a single initiator through a SAN.
Topology Models:
1) Mesh fabric topology: partial or full; all switches are connected to each other.
2) Compound core-edge fabric: the core connectivity tier is made up of switches configured in a full mesh.
   - Core tiers are only used for ISLs
   - Edge tiers are used for host or storage connectivity
3) Simple core-edge fabric:
   -- Can be 2/3 tier: a single core tier, 1/2 edge tiers
   -- In a 2-tier topology, storage is connected to the core & hosts are connected to the edge.
Switches are connected to each other in a fabric using ISLs. This is accomplished by connecting them to each other through an expansion port (E_port) on the switch.
ISLs are used to transfer node-to-node data traffic, as well as fabric management traffic, from one switch to another.
Oversubscription Ratio: a measure of the theoretical utilization of an ISL.
If possible, avoid ISLs in host-to-storage connectivity whenever performance requirements are stringent.
If ISLs are unavoidable, the performance implications shd be carefully considered during the design stage.
Best Practices while Adding ISLs in a Fabric:
Always connect each switch to at least 2 other switches in a fabric. This prevents a single link failure from causing total loss of connectivity to nodes on that switch.
FCoE -- Fibre Channel over Ethernet -- is a new protocol in the process of being defined by the T11 stds committee.
As a physical interface it uses CNAs (Converged Network Adapters).
The B-Series has Web Tools & Connectrix Manager as web/GUI management consoles.
The MDS-Series has Cisco Fabric Manager and Device Manager.
SAN Manager, an integral part of EMC ECC, provides some management & monitoring capabilities for devices from both vendors.
Connectrix Manager is a Java-based licensed tool.
Fabric & Device Manager must be installed on a server & can support several clients.
This simplifies mgmt of the MDS series through an integrated approach to fabric administration, device discovery, topology mapping, & config functions for the switch, fabric, & port.
Port type security refers to the ability to restrict which kind of functions a switch port can assume.
Example: a switch port can be restricted to only function as an F_port or an E_port.
CHAP authentication applies to iSCSI interfaces.
Persistent port disable means that a port remains disabled across reboots.
When a device logs into a fabric, it registers with the name server.
When a port logs into a fabric, it goes through a device discovery process with other devices registered as SCSI FCP in the name server.
* The zoning function controls this process by only letting ports in the same zone establish these link-level services.
A collection of zones is called a zone set / zone config.
EMC recommends Single HBA Zoning: each zone consists of a single HBA port & 1 or more storage ports.
- A separate zone for each HBA, which makes zone mgmt easier when replacing HBAs (see the sketch below).
A port can reside in multiple zones.
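On a B-Series switch, such a single-HBA zone might be created like this (a sketch; the zone/config names & WWNs are hypothetical):
zonecreate "host1_hba0", "10:00:00:00:c9:2a:4b:6c; 50:06:01:60:41:e0:55:0a; 50:06:01:68:41:e0:55:0a"
cfgcreate "prod_cfg", "host1_hba0"
cfgenable "prod_cfg"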
VSANs: user-defined logical SANs. VSANs enhance overall fabric security by creating multiple logical SANs over a single h/w infrastructure.
A VSAN can exist within a single-switch chassis or span multiple chassis.
Nodes on one VSAN cannot connect to nodes on another VSAN.
Each VSAN has its own active zone set, name server, and routing protocol.
EMC currently supports 20 VSANs.
Ingress & Egress Ports:
LUN masking prevents multiple hosts from trying to access the same volume presented on a common storage port.
*** For every I/O, the PowerPath filter driver looks at the volume path set & selects the path based on the load balancing policy & failover settings for the device.
** PowerPath does not manage the I/O queue; it manages the placement of I/O requests in the queue.
PowerPath Load Balancing Policies:
-- Symm_opt / CLAR_opt / Adaptive (default)
        -- I/O requests are balanced across multiple paths based on the composition of reads, writes and user-assigned device/appln priorities.
        -- Default policy on systems with a valid PowerPath license.
-- Round Robin
        -- I/O requests are distributed to each available path in turn.
-- Least_I/Os
        -- Requests are assigned to the path with the fewest requests in the queue.
-- Least_Blocks
        -- Requests are assigned to the path with the fewest total blocks in the queue.
-- Request
        -- Path failover only.
-- No Redirect
        -- Disables path failover & load balancing.
        -- Default for Symmetrix when there is no license key.
        -- Default on CLARiiON with a base license.
-- Basic Failover
        -- PowerPath fabric failover functionality.
        -- Default for CLARiiON & Symmetrix when there is no license key.
PowerPath Automatic Path Testing & Restore:
-- Auto-Probe: tests for dead paths
        -- Periodically probes inactive paths to identify failed paths before sending user I/O to them.
-- Auto-Restore: tests for restored paths
        -- Periodically probes failed/closed paths to determine if they have been repaired.
The auto-probe function uses SCSI Inquiry commands for the probe, so that even a not-ready device returns successfully.
The PowerPath GUI is used to configure, monitor & manage PowerPath devices.
PowerPath Administrator has two panes:
1) On the left is the Scope pane, where PowerPath objects are displayed in a hierarchical list.
2) On the right is the Result pane, which provides a view of configuration statistics for the PowerPath objects selected in the Scope pane.
PowerPath CLI Commands:
powermt check
powermt check_registration
powermt config
powermt display
powermt display options
powermt display paths
powermt load
powermt restore
powermt remove
powermt save
powermt set mode
powermt set periodic_autorestore
powermt set policy
powermt set priority
powermt set write_throttle
powermt set write_throttle_queue
powermt version
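Typical usage of a few of these (a sketch; the device name is hypothetical):
# powermt display dev=emcpowera      # show paths, policy & state for one device
# powermt set policy=co dev=all      # set the CLAROpt load balancing policy on all devices
# powermt save                       # persist the current configuration across reboots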
PowerPath Encryption with RSA is a host-based data-at-rest encryption solution utilizing EMC PowerPath & RSA Key Manager for the Datacenter.
-- It safeguards data in the event that a disk is removed from an array, or against unauthorised data access.
-- Centralised key management
-- Consistent encryption technology
-- Flexible encryption -- choose LUNs or volumes to encrypt
-- Replication support
EMC PowerPath Migration Enabler is a solution that leverages the same underlying technology as PowerPath, & enables other technologies, like array-based replication & virtualisation, to eliminate application downtime during data migrations or virtualization implementations.
A Domain is a group of systems that a user can manage from the same management application session. Each domain has a directory that defines the systems in the domain.
To add a CLARiiON to a domain, the user can scan for CLARiiON systems on the subnet, or optionally supply the IP address of a CLARiiON SP, by clicking on "Add/Remove Systems" in the Domains view.
********************************************************************************
CLARiiON ADVANCED MANAGEMENT
********************************************************************************
Virtual Capacity:
% of virtual capacity that is allocated.
The threshold can be set between 50% and 80%; the default is 70%.
Once a set threshold is reached, an alert will be triggered.
Tiering tab: used to automatically transfer data to different storage tiers depending on needs.
To delete a Pool it should be EMPTY.
** If the RAID group is not fragmented, then the Free Capacity & Largest Contiguous Free Space are IDENTICAL.
** Largest Contiguous Free Space --> the size of the largest LUN that can be created in this RAID group.
Thin -- if enabled, thin LUNs will be created in the pool, which provides on-demand storage allocation. Otherwise, fully allocated pool LUNs will be created.
NaviSecCLI: the bind command is used to create a LUN within an existing RAID group.
getlun is used to view LUN information. Syntax: getlun <lun-no> <optional flags>
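For example (a sketch; the SP address, RAID group & LUN numbers are hypothetical):
naviseccli -h 10.0.0.1 bind r5 20 -rg 0 -cap 100 -sq gb    # bind LUN 20 as RAID 5 in RAID group 0, 100 GB
naviseccli -h 10.0.0.1 getlun 20                           # view information for LUN 20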
Access Logix:
A licensed software package that runs on each Storage Processor. It controls access to data on a host-by-host basis, allowing multiple hosts to have individual, independent views of a storage system.
Storage Group: a collection of one or more LUNs or MetaLUNs to which you connect 1 or more servers.
Access Logix -- Functionality
LUN Masking --
Access Logix masks certain LUNs from the hosts that are not authorised to see them, & presents those LUNs only to the authorised servers.
It internally performs the mapping of array LUN numbers (ALUs) to host LUN numbers (HLUs).
It determines which physical addresses (device numbers) each attached host uses for its LUNs.
** When host agents start up, shortly after host boot time, they send initiator info to all connected storage systems. This initiator info is stored in the Access Logix DB.
Access to a LUN is controlled by info stored in the Access Logix DB, which is stored in a reserved area of CLARiiON disk.
Initiator Registration Records: FC
Host name
Host IP Address
Host HBA WWNs
CLARiiON Port WWNs
OS type.
Initiator Registration Records: iSCSI
- SCSI names
iqn: iSCSI Qualified Name
eui: Extended Unique Identifier.
Access to the LUNs is controlled by an ACL, which contains:
- the 128-bit globally unique ID of the LUN
- 128-bit unique IDs for the HBAs in the host (64-bit WWNN + 64-bit WWPN)
- the 128-bit CLARiiON SP port WWN
Limits:
* Any host may be connected to only one storage group on any storage system.
* No host may be connected to more than 4 storage groups, i.e., a host can only use LUNs from four or fewer storage systems.
Hosts are identified by an IP address & host name.
Automatic Registration: the process of making a host known to the storage system.
Access Logix, once installed & activated on a storage system, can only be disabled through the CLI.
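Storage groups can also be managed from the CLI, e.g. (a sketch; the names, IP & LUN numbers are hypothetical):
naviseccli -h 10.0.0.1 storagegroup -create -gname SG_host1
naviseccli -h 10.0.0.1 storagegroup -addhlu -gname SG_host1 -hlu 0 -alu 20    # present ALU 20 to the host as HLU 0
naviseccli -h 10.0.0.1 storagegroup -connecthost -host host1 -gname SG_host1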
LUN MIGRATION: or Virtual LUN Migration, allows data to be moved from one LUN to another, regardless of RAID type, disk type, speed & number of disks in the RAID group.
It is supported only on CX3 & CX4.
********************************************************************************
CONNECTRIX B-SERIES SWITCHES
********************************************************************************
Departmental switches -- 8-64 ports, speed: 4/8 Gb/s
Multi-protocol routers -- 16-18 ports, speed: 4 Gb/s
Directors -- up to 384 ports, speed: 4/8/10 Gb/s
The 16/32/48-port FC blades run @ 4 Gb/s in the ED-48000B director & can run @ 8 Gb/s when installed in the ED-DCX director.
** CP blade for the DCX chassis -- 2 per DCX chassis
** CR8 (core) blade -- its purpose is to handle internal routing as well as support for the new ICL connections. Internal frame routing is made available by utilizing 4 Condor2 ASICs per blade.
Also on each blade are 2 ICL ports, 0 & 1. Each provides 128 Gb/s of bandwidth for connection to another DCX chassis.
ED-DCX-8B WWN card status LEDs:
The DCX has 2 WWN cards: WWN 0 on the left & WWN 1 on the right. Status LEDs are visible through the bezel & indicate power & fault conditions for each blade in the chassis.
WWN 0 LED meanings:            WWN 1 LED meanings:
1-4 => Port Blades             7    => CP8 1
5   => CR8 Core 0              8    => CR8 Core 1
6   => CP8 0                   9-12 => Port Blades
DCX - Port Side
12 total slots:
- 8 port blade slots (1-4, 9-12)
- 2 CP blades (6, 7)
- 2 core blades (5, 8)
--------------------------------------------------
B-Series Trunking:
The optional ISL Trunking feature allows inter-switch links b/w 2 Connectrix B models to merge into a single logical link.
ISL Trunking reduces situations that require static traffic routes & individual ISL management.
Trunking optimizes fabric performance by distributing the traffic across the shared bandwidth of all the ISLs in the trunking group.
Fibre Channel Routing Services enables device connectivity across SAN boundaries through logical SANs called LSANs.
* With FCRS, you can interconnect devices without having to redesign/reconfigure the entire fabric.
Integrated Routing Support:
Integrated Routing is a new licensed capability introduced in Fabric OS v6.1.
It allows any port in an ED-DCX-B with 8 Gb port blades, or in a DS-5300B/DS-5100B, to be configured as an EX_port.
This eliminates the need to add a PB-48K-18i blade or use the MP-7500B for Fibre Channel routing purposes.
To manage an M-Series switch over the serial connection, connect the M-Series serial cable to the COMM port of the switch.
Create & open a HyperTerminal session using your COMM port as the connection method.
Set the connection properties to the following: 57600 bits per second, 8 data bits, None for parity, 1 stop bit, None for flow control.
Change the IP address, subnet mask & gateway address using the ipconfig <ip-address> <subnet-mask> <gateway-ip-address> command.
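For example (hypothetical addresses):
ipconfig 10.32.4.20 255.255.255.0 10.32.4.1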
switchshow
fabricshow -- displays all switches in a fabric
portcfgshow -- displays the permanent config for each port
zoneshow -- displays the current zoning config
        Defined config: all the zones & zone configs saved on the switch
        Effective config: the active zone config for the fabric
2 discovery methods are supported in Connectrix Manager 9.0 & higher:
1) Individual discovery -- uses IP addresses to discover switches; the result is displayed in the "Selected individual switches" discovery field.
2) Subnet discovery -- utilizes subnet broadcasts; all switches discovered using subnet discovery are displayed in the "Selected subnets" area.
For Connectrix Manager to manage a switch, it must first be discovered. From the Discover menu, select Setup to add a range of IP addresses for discovery.
From the Setup menu box select the Out-of-band tab, then click the Add button @ the bottom of the box. Once the Address Properties box appears, enter the IP address & subnet mask. You can also enter a range of IP addresses from this menu, under the ADD MULTIPLE option, by entering the last IP address.
Group Manager is the enabler for data collection, providing storage locations for the files.
To perform data collections, select Configure -> Group Manager from the CM menu bar.
This brings you to the wizard, which guides you through the process & provides a progress status.
4 supportsave status options:
1) Not Started
2) In-Progress
3) Completed
4) Failed
Zoning in Connectrix Manager:
To view/configure zoning, select the fabric WWN under "Zoning Scope".
To view the active zone set for this fabric, select the Active Zoneset tab.
If the zone set is not in the zoning library of the Connectrix Manager server, an error message will be displayed.
********************************************************************************
CONNECTRIX MDS-SERIES SWITCHES
********************************************************************************
MDS 9000 Product Series:
The 9500 models are the director class & include the 9513, the 9509 & the 9506.
The 9200 models are single-supervisor switches; they include the 9222i & the legacy 9216a or 9216i.
The 9100 models are departmental switches & include the 9134 & 9124 ---- 4 Gb/s
MDS-9120 & 9140 -- 2 Gb/s
The Connectrix MDS 9513 is a 13-slot director-class switch.
-- It has 11 slots available for modules & 2 for supervisors.
-- Any module can be placed in any available slot.
-- It supports a max of up to 528 ports.
-- It has 2 fan trays & 3 crossbar fabric modules that plug directly into the back of the chassis.
-- These crossbars perform the switching function handled by the supervisor modules on other MDS 9500 series directors.
The MDS 9509 director chassis has 9 slots.
-- Two slots, 5 & 6, are reserved for supervisors;
-- the other 7 can contain any switching module.
-- Slots are numbered 1 through 9, from top to bottom.
The MDS 9506 chassis has 6 slots:
-- Two slots, 5 & 6, are reserved for supervisors;
-- 2 PEMs (Power Entry Modules)
-- up to 4 switching modules.
The MDS-9222i is a 2nd generation MDS chassis containing 1 expansion slot.
-- It integrates 18 auto-sensing 4 Gb/s FC ports, 4 1 Gb/s Ethernet ports & an expansion slot into a single chassis.
-- Both supervisor & line card functionality are supported in slot 1.
-- Slot 1 is preconfigured with 18 FC ports & 4 Ethernet ports, leaving slot 2 available for any switching module.
-- Management connectivity is provided by the management Ethernet port.
The MDS-9134 offers up to 32 auto-sensing 4 Gb/s FC ports & 2 10 Gb/s ports in a single chassis.
-- Each Fibre Channel port has dedicated bandwidth.
-- It has the flexibility to expand from the 24-port base model up to 32 ports, in 8-port increments, with the use of a license.
-- The 2 10 Gb/s ports support a range of optics for ISL connections to other MDS 10G ports.
-- Both 10G ports must be activated with a license that is independent from the 4 Gb/s port licenses.
Generation-2 FC Modules:
These address key SAN consolidation requirements.
Port groups on 2nd Gen:
-- Each port group has 12.8 Gb/s of internal bandwidth.
-- Any port in a port group can be configured to have dedicated bandwidth @ 1, 2, or 4 Gb/s.
-- All remaining ports in the port group share the remaining bandwidth.
-- Any port in dedicated bandwidth mode has access to extended buffers.
-- Any port in shared BW mode has only 16 buffers.
-- Ports in shared mode cannot be used as ISLs.
--------------------------------------------------------------------------------
VSAN Configuration:
A feature that leverages the advantages of isolated fabrics with capabilities that address the limitations of isolated SAN islands.
VSANs are configured by setting the following attributes:
VSAN ID -- identifies the VSAN by number (2-4093)
VSAN Name -- identifies the VSAN for mgmt purposes; up to 32 chars long
Load Balancing -- S_ID/D_ID/OX_ID (default); identifies the load balancing params if the VSAN extends across ISLs
VSAN State -- Active (default)
VSAN Interface Membership
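On the MDS CLI this might look as follows (a sketch; the VSAN number, name & interface are hypothetical):
switch# config t
switch(config)# vsan database
switch(config-vsan-db)# vsan 10 name Engineering
switch(config-vsan-db)# vsan 10 interface fc1/1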
VSAN Trunking:
Allows multiple VSANs to share a common interface (requires TE_ports & EISLs).
The interface may be a single EISL or a port channel.
VSAN merge
Each FC interface has an associated trunk-allowed VSAN list.
Zones:
All zones & zone sets are contained within each VSAN.
Each VSAN can have multiple zone sets, but only 1 zone set can be active at any given time.
Each VSAN has a full zone DB and an active zone DB.
Only active zone sets are distributed to other physical switches.
vi /etc/multipath.conf
devnode_blacklist {
        devnode "^sda$"        # example pattern (hypothetical) -- excludes the internal boot disk
}
# cd /proc/scsi
# more scsi        # list the devices currently known to the SCSI mid-layer
Device bus scan on a Linux host -- as Linux can't pick up newly provisioned devices without disruption to existing devices,
the HBA driver module must be unloaded and reloaded in order to create usable SCSI devices for the new LUNs:
# /etc/init.d/naviagent stop     # shut down the Navisphere agent
# /etc/init.d/powerpath stop     # stop PowerPath
# modprobe -r lpfc               # unload the (Emulex) HBA driver
# lsmod | grep lpfc              # verify the driver is unloaded
# modprobe lpfc                  # reload the driver -- rescans the bus & picks up the new LUNs
# lsmod | grep lpfc              # verify the driver is loaded again
# /etc/init.d/powerpath start
# /etc/init.d/naviagent start
Persistent binding for Linux devices:
Native device names ("/dev/sdX") are not guaranteed to persist across reboots.
Using fdisk to align a Linux partition:
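A commonly documented procedure uses fdisk expert mode (a sketch; the device name is hypothetical & the 128-sector offset assumes 64 KB alignment):
# fdisk /dev/emcpowera
#   n   -> create the partition
#   x   -> enter expert mode
#   b   -> move the beginning of data of partition 1
#   128 -> start at sector 128 (128 x 512 bytes = 64 KB)
#   w   -> write the partition table & exit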
Linux iSCSI commands:
# iscsi-iname    # generate/display an iSCSI initiator name
# iscsi-ls       # list discovered iSCSI devices
********************************************************************************
ALERTS & EVENTS MONITOR
********************************************************************************
Alert Generation
Triggered by FLARE events
Triggered by configuration issues
Exceptions for GUI generated events
Simplifies Event Mgmt.
Event Monitor Architecture:
Event Monitor relies on confi file-- navimon.cfg which is a text file, which can
be edited manually.
Monitor Agents run on 1/more hosts/SP & watch over the systems.
2 Models:
Centralized monitoring model
distributed monitoring model
Event Monitor Templates
These define the events that trigger a response, and the response.
Each template has a unique name.
Responses to any of the specified events:
SMTP email
Pager
SNMP
EMC dial home
User-specified application
Severity: Critical Error, Error, Warning, Information
Messages are formatted according to a format string:
Ex: "Event %N% occurred at %T%"
Navisphere Analyzer:
Helps us analyze performance information.
Examine past & present performance.
The Performance Survey allows an at-a-glance view of params that are over a preset threshold.
NA views:
8 object types with performance info:
Disk, SP, LUN, MetaLUN, SnapView session, MirrorView/A mirror, RAID group, SP port.
Abbreviated: D, S, L, M, SS, S, RG, P
Analyzer has 7 data views for traditional LUNs, 3 for thin LUNs:
- Performance Overview
- Performance Survey
- Performance Summary
- Performance Detail
- I/O Distribution Summary
- I/O Distribution Detail
- LUN I/O Disk Detail
Analyzer Polling & Logging:
The polling rate is configurable -- the real-time archive interval ranges from 60 sec to 3600 sec; default is 60.
** The rate @ which info gathering is performed is configurable.
Archive interval logging is configurable from 60 sec to 3600 sec; default is 120.
User control of start/stop logging.
NQM -- Navisphere QoS Manager:
Define and achieve service level goals for multiple applns.
Provide SLAs through user-defined performance policies.
NQM Queuing
NQM Control Engine
NQM Measuring
Control Methods:
Limits -- supports 32 I/O classes per policy
Cruise Control -- supports only 2 I/O classes per policy
Fixed Queue Length
Features:
UI, Integrated Scheduler, Archiving Tools
Fallback Policy -> backup policy if the primary policy cannot be achieved.
NQM POLICY BUILDER STEPS: Select open -> Select I/O classes -> Set goals -> Run
********************************************************************************
Host Integration of ESX Provisioning:
********************************************************************************
Initiator registration (HBA) is automatic, without the need for NaviAgent.
LUN masking is performed @ the CLARiiON.
The max LUNs that can be presented to an ESX host is 256.
How iSCSI LUNs are discovered:
Static config
Send Targets
-- the iSCSI device returns its target info as well as any additional target info that it knows about.
VMFS Volume: a repository of VMs & VM state data.
- Each VM config file is stored in its own subdirectory.
- Repository for other files.
Addressed by a volume ID, a datastore name, & a physical address.
Accessible in the service console underneath /vmfs/volumes.
Create a VMFS:
Select the device location (iSCSI / FC LUN)
Specify the datastore name
Specify the disk capacity
Specify the max file size & allocation unit size
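From the service console the same thing can be scripted with vmkfstools, e.g. (a sketch for ESX 3.x; the datastore name & disk path are hypothetical):
# vmkfstools -C vmfs3 -S prod_ds1 /vmfs/devices/disks/vmhba1:0:1:1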
RDM (Raw Device Mapping) Volume:
Why use a raw LUN with a VM?
- Enables use of SAN mgmt s/w inside a guest OS
- Allows VM clustering across boxes / physical-to-virtual
An RDM allows a special file in a VMFS volume to act as a proxy for a raw device.
An RDM allows a raw LUN to use snapshots.
FLARE Release 29
Features:
- Virtualization-aware Navisphere -- enables Navisphere to display relationship info b/w VMware ESX Servers, VMs & CLARiiON LUNs.
- Drive spin down -- provides a power savings ratio of 55-60% on a per-drive basis; all 1 TB SATA II drives are qualified for this.
- 10 Gb/s iSCSI support
- LUN shrink -- reclaims unused storage space.
- Virtual Provisioning (Phase 2 replication support)
- VLAN tagging
- NQM, Analyzer and replication enhancements.
VLAN Tagging:
Enables admins to better analyze & tune an application's n/w performance.
Segregation of I/O load by business unit for better chargeback options.
Only specified hosts have access to a VLAN's data stream.
Protocol: 802.1Q
Allows communication with multiple VLANs over one physical cable.
The VLAN ID is a 12-bit field (1-4095).
Event Heartbeat:
Allows EMC to know when an array has stopped dialing home.
Sends a heartbeat event to EMC periodically from the Distributed Monitor or Centralised Monitor.
eg:
EventMonitor -template -destroy -templateName <Heartbeat templatename> -messner
SNAPVIEW
********************************************************************************
Goal:
Creation of PIT copies of data
Support for consistent online backup/data replication
Offload backup & other processing from production hosts.
Used in testing, decision support scenarios, data replication.
Snapshots: use pointer-based replication along with Copy-on-First-Write technology.
There are 3 managed objects: snapshots, sessions, the Reserved LUN Pool.
Clones: make full copies of the source LUN.
3 managed objects: Clone Group, Clone, Clone Private LUN.
Fracture Log -- a bitmap kept in SP memory which tracks the changes on sources/clones.
8 copies of multiple PITs are allowed for both types.
** Consistent session start & consistent clone fracture -- for consistency of data across multiple objects.
Comparison:
Access to the PIT copy:
- Clones need an initial full synchronization, which is time consuming.
- Snapshot data is immediately available for use.
Performance impact on the source LUN:
- Snapshots use CoFW, which increases response times.
- Fractured clones are independent of their source LUNs.
Use of disk space:
- A clone occupies the same amount of space as its source LUN.
- A snapshot uses around 20% of the space of its source LUN.
Recovery from data loss on the source LUN:
- A snapshot depends on the source LUN for operation.
- A clone is completely independent of the source LUN.
Limits:                                Clones          Snapshots
Objects per source LUN                 8 clones        8 snapshots/8 sessions
Source LUNs per storage system         up to 1024      up to 512
Objects per storage system             up to 2048      up to 2048
Clone groups per SS                    1024            n/a
Reserved LUNs per SS                   n/a             512
A snapshot is an instantaneous, frozen, virtual copy of a LUN on a storage system.
Frozen: an unchanging PIT view; the snapshot will not change unless the user writes to it.
Instantaneous: created instantly -- no data is copied at creation time.
Reserved LUN Pool: the area where we put the original chunks of data before we modify them on the source LUN.
Session: the mechanism that performs the actual tracking of data changes.
When we start a session, the CoFW mechanism is enabled. From that point on, any time a chunk is modified for the first time, the data in that chunk will be saved into the Reserved LUN Pool.
When we stop a session, no further writes into the Reserved LUN Pool take place, & the CoFW mechanism is disabled.
Any source LUN may have up to 8 SnapView sessions at any time.
CoFW operation: allows efficient utilization of copy space.
- We only make 1 copy of the data which is changed, & we make it the first time the data changes.
- Chunks are a fixed size of 64 KB.
- Chunks are saved when they are modified for the first time.
Reserved LUN Recommendations:
The total no. of RLs is limited to 512 as of the R28 release.
They may be of different sizes.
They are assigned as required -- no checking of size, disk type or RAID type; a minimum size check (at least .2% of the size of the source) is performed initially for the first RL.
They can be traditional LUNs only; thin LUNs are not allowed.
Avg source LUN size = total size of source LUNs / no. of source LUNs
The PSM contains info regarding the Reserved LUNs & source LUNs involved in snap sessions. It is a predefined memory area.