
Create iSCSI LUNs

This procedure explains how to create an iSCSI target and an iSCSI LUN on your Celerra® system. iSCSI (Internet Small Computer Systems Interface) is a transport protocol for sending SCSI packets over TCP/IP networks. iSCSI initiators and targets are the key components in an iSCSI architecture. Initiators and targets are devices (software or hardware) that package and transfer SCSI information over an IP network.
◆ Overview ............................................................................................... 2
◆ Pre-implementation tasks ................................................................... 4
◆ Implementation worksheets ............................................................... 5
◆ Connect external network cables ....................................................... 9
◆ Configure storage for a Fibre Channel enabled system ............... 11
◆ Configure the network ...................................................................... 21
◆ Create a file system ............................................................................ 22
◆ Delete iSCSI LUN created during startup ...................................... 25
◆ Create iSCSI target ............................................................................. 26
◆ Create iSCSI LUN............................................................................... 28
◆ Configure hosts .................................................................................. 30
◆ Configure and test standby relationships....................................... 31
◆ Appendix............................................................................................. 38


Overview
Before you begin this iSCSI implementation procedure, ensure that you have completed the following tasks.

Procedure overview

To create an iSCSI target and iSCSI LUN, you must perform the following tasks:
1. Verify that you have performed the pre-implementation tasks:
• Create a Powerlink® account.
• Register your Celerra with EMC® or your service provider.
• Install Navisphere® Service Taskbar (NST).
• Add additional disk array enclosures (DAEs) using the NST (not available for NX4).
2. Complete the implementation worksheets.
3. Cable additional Celerra ports to your network system.
4. Configure unused or new disks with Navisphere Express.
5. Configure your network by creating a new interface to access the Celerra storage from a host or workstation.
6. Create a file system using a system-defined storage pool.
7. Delete the iSCSI LUN created during startup.
8. Create an iSCSI target.
9. Create an iSCSI LUN.
10. Configure host access.
11. Configure and test standby relationships.

Host requirements for iSCSI

Software

For Celerra VSS Provider for iSCSI:
• Celerra Network Server version 5.5 or later
• A system running Windows Server 2003 with Service Pack 1, Standard, Enterprise, or Datacenter Edition


Note: A number of Microsoft hotfixes must be applied to the Windows iSCSI host to correct problems with VSS. The Celerra Network Server Release Notes list the required hotfixes.

For CBMCLI commands:


• Celerra Network Server version 5.5.27 or later
• Linux kernel 2.4: iSCSI initiator version 3.6.3 or later
• Linux kernel 2.6: iSCSI initiator version 4.0.1.11 or later
Hardware
• No specific hardware requirements
Network
• An Ethernet 10/100/1000 network with one or more iSCSI
hosts configured with the most recent version of the Microsoft
iSCSI Software Initiator.
Storage
• No specific storage requirements


Pre-implementation tasks
Before you begin this iSCSI implementation procedure, ensure that you have completed the following tasks.

Create a Powerlink account

You can create a Powerlink account at http://Powerlink.EMC.com. Use this website to access additional EMC resources, including documentation, release notes, software updates, information about EMC products, licensing, and service.

Register your system with EMC

If you did not register your Celerra at the completion of the Celerra Startup Assistant, you can do so now by downloading the Registration wizard from Powerlink. The Registration wizard can also be found on the Applications and Tools CD that was shipped with your system. Registering your Celerra ensures that EMC Customer Support has all pertinent system and site information so they can properly assist you.

Download and install the Navisphere Service Taskbar (NST)

The NST is available for download from the CLARiiON® Tools page on Powerlink and on the Applications and Tools CD that was shipped with your system.

Add additional disk array enclosures

Use the NST to add new disk array enclosures (DAEs) to fully implement your Celerra (not available for NX4).


Implementation worksheets
Before you begin this implementation procedure, take a moment to fill out the following implementation worksheets with the values for the various devices you will need to create.

Create interface worksheet

The New Network Interface wizard configures individual network interfaces for the Data Movers. It can also create virtual network devices: Link Aggregation, Fail-Safe Network, or Ethernet Channel. Use Table 1 to complete the New Network Interface wizard. You will need the following information:
Does the network use variable-length subnets?
❑ Yes ❑ No
Note: If the network uses variable-length subnets, be sure to record the
correct subnet mask. Do not assume 255.255.255.0 or other common values.
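For example, an interface carved from a /26 subnet uses the mask 255.255.255.192; recording 255.255.255.0 for that interface would give the Data Mover an incorrect view of its local network.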

Table 1 Create interface worksheet

Record the following values for each interface:
Data Mover number ____________________________________________
Device name or virtual device name ____________________________
IP address ____________________________________________________
Netmask _______________________________________________________
Maximum Transmission Unit (MTU) (optional) ____________________
Virtual LAN (VLAN) identifier (optional) ______________________
Devices (optional) ____________________________________________


Create file system worksheet

The Create File System step creates a file system on a Data Mover. This step can be repeated as needed to create additional file systems.
Read/Write Data Mover: ❑ server_2 ❑ server_3
Volume Management: ❑ Automatic (recommended)
Storage Pool for Automatic Volume Management:
❑ CLARiiON RAID 1 (Not available for NX4)
❑ CLARiiON RAID 5 Performance
❑ CLARiiON RAID 5 Economy
❑ CLARiiON RAID 1/0
❑ CLARiiON RAID 6
File System Name ____________________________________________
File System Size (megabytes) __________________________________
Use Default User and Group Quotas: ❑ Yes ❑ No
Hard Limit for User Storage (megabytes) ____________________
Soft Limit for User Storage (megabytes) _____________________
Hard Limit for User Files (files)_____________________________
Soft Limit for User Files (files) _____________________________
Hard Limit for Group Storage (megabytes) __________________
Soft Limit for Group Storage (megabytes) ___________________
Hard Limit for Group Files (files) ___________________________
Soft Limit for Group Files (files) ____________________________
Enforce Hard Limits: ❑ Yes ❑ No
Grace Period for Storage (days)_____________________________
Grace Period for Files (days) _______________________________

Create an iSCSI target worksheet

The Create an iSCSI Target step creates an iSCSI target on a Data Mover. This step can be repeated as needed to create additional iSCSI targets.
Data Mover: ❑ server_2 ❑ server_3 ❑ server_4 ❑ server_5
Target Alias Name ___________________________________________


Auto Generate Target Qualified Name (recommended):


❑ Yes ❑ No
Target Portals _____________________________________
______________________________________
______________________________________

Create an iSCSI LUN worksheet

The Create an iSCSI LUN step creates an iSCSI LUN on a Data Mover. This step can be repeated as needed to create additional iSCSI LUNs.
Data Mover: ❑ server_2 ❑ server_3 ❑ server_4 ❑ server_5
Target Name _________________________________________________
Target Portals
______________________________________
______________________________________
______________________________________
File System Name ____________________________________________
Create Multiple LUNs: ❑ Yes ❑ No
Number of LUNs to Create:____________________________________
LUNs:
LUN Number (automatically assigned for multiple LUNs) ______
Size of LUN (megabytes) ___________________________________
LUN Number (automatically assigned for multiple LUNs) ______
Size of LUN (megabytes) ___________________________________
LUN Number (automatically assigned for multiple LUNs) ______
Size of LUN (megabytes) ___________________________________
LUN Number (automatically assigned for multiple LUNs) ______
Size of LUN (megabytes) ___________________________________
LUN Number (automatically assigned for multiple LUNs) ______
Size of LUN (megabytes) ___________________________________
LUN Number (automatically assigned for multiple LUNs) ______


Size of LUN (megabytes) __________________________________


Initiators ______________________________________
_____________________________________
_____________________________________
Enable Multiple Access: ❑ Yes ❑ No
Challenge Handshake Authentication Protocol (CHAP) Initiators
______________________________________
_____________________________________
_____________________________________
Enable Reverse Authentication: ❑ Yes ❑ No
iSCSI Service Information:
iSNS Server ______________________________________________
iSNS Port (optional) ______________________________________________
When you have finished “Implementation worksheets,” go to “Connect external network cables” on page 9.


Connect external network cables


If you have not already done so, you will need to connect the desired
blade network ports to your network system.
Figure 1 shows the 4-port copper Ethernet X-blade’s network ports for an NX4. They are labeled cge0-cge3. Cable these ports as desired.
Figure 2 on page 9 shows the 2-port copper Ethernet and 2-port optical 10 GbE X-blade’s network ports for an NX4. They are labeled cge0-cge1 and fxg0-fxg1. Cable these ports as desired.

Figure 1 4-port copper Ethernet X-blade (port labels: internal management module, Com 1, Com 2, cge0-cge3, BE 0, BE 1, AUX 0, AUX 1)

Figure 2 2-port copper Ethernet and 2-port optical 10 GbE X-blade (port labels: internal management module, Com 1, Com 2, fxg0, fxg1, cge0, cge1, BE 0, BE 1, AUX 0, AUX 1)

Any advanced configuration of the external network ports is beyond the scope of this implementation procedure. For more information about the many network configuration options the Celerra system supports, such as Ethernet channels, link aggregation, and FSNs, refer to the Configuring and Managing Celerra Networking and Configuring and Managing Celerra Network High Availability technical modules.
When you have finished “Connect external network cables,” go to
“Configure storage for a Fibre Channel enabled system” on page 11.


Configure storage for a Fibre Channel enabled system


This section details how to create additional storage for an NX4 Fibre Channel enabled storage system using Navisphere Express.

Configure storage with Navisphere Express

Configure storage with Navisphere Express by doing the following:
1. To start Navisphere Express, open an internet browser such as Internet Explorer or Mozilla Firefox.
2. Type the IP address of a storage processor of the storage system
into the internet browser address bar.

Note: This IP address is the one that you assigned when you initialized
the storage system.

3. Type the user name and password to log in to Navisphere Express, as shown in Figure 3 on page 12.

Note: The default username is nasadmin and the default password is nasadmin.


Figure 3 Navisphere Express Login screen

4. To configure unused storage, select Disk Pools in the left navigation panel from the initial screen shown in Figure 4 on page 13.


Figure 4 Manage Virtual Disks screen

Note: If you are trying to create a new virtual disk (LUN) for Automatic
Volume Management (AVM) to use in a stripe with existing virtual disks,
the new virtual disk must match the size of the existing virtual disks.
Find the information on the existing virtual disks by going to the details
page for each virtual disk by selecting Manage > Virtual Disks >
<Existing_Virtual_Disk_Name>. Record the MB value of the existing
virtual disks and use this value as the size for any new virtual disk.

5. Click Create New Disk Pool, as shown in Figure 5 on page 14.


Figure 5 Manage Disk Pools screen

Note: You should create at least two disk pools. The software assigns
each disk pool that you create to an SP as follows: Disk Pool 1 to SP A,
Disk Pool 2 to SP B, Disk Pool 3 to SP A, Disk Pool 4 to SP B, and so on.
All virtual disks that you create on a disk pool are automatically assigned
to the same SP as the disk pool. If you create only one disk pool on the
storage system, all virtual disks on the storage system are assigned to SP
A and all data received, or sent, goes through SP A.


6. Select the RAID group type for the new disk pool, as shown in
Figure 6 on page 15.
• The RAID Group Type values shown should be applicable to your system.
• For more information, see the NAS Support Matrix document on http://Powerlink.EMC.com.
Note: RAID 5 is recommended.

Figure 6 Create Disk Pool screen

7. Select the disks in the Disk Processor Enclosure to include in the new disk pool, as shown in Figure 6.


8. Click Apply.
9. Click Create a virtual disk that can be assigned to a server.
10. Select the disk pool just created, as shown in Figure 6.
11. Type the Name for the new virtual disk(s), and select its Capacity
and the Number of Virtual Disks to create, as shown in Figure 7
on page 16.

Note: It is recommended that virtual disk capacity not be larger than 2 TB.

Figure 7 Create Virtual Disks screen


12. Assign a server to the virtual disk(s) by using the Server list box,
as shown in Figure 7.

Note: To send data to or receive data from a virtual disk, you must assign
a server to the virtual disk.

13. Click Apply to create virtual disk(s).

Note: The system now creates the virtual disks. This may take some time
depending on the size of the virtual disks.

14. Select Virtual Disks from the left navigation panel to verify the creation of the new virtual disk(s).
15. Verify the virtual disk server assignment by looking under Assigned To on the Manage Virtual Disks page, as shown in Figure 8.


Figure 8 Verify new virtual disk assignment

16. To make the new virtual disks (LUNs) available to the Celerra system, use Celerra Manager. Open Celerra Manager by entering the following URL in a browser:
https://<control_station>
where <control_station> is the hostname or IP address of the
Control Station.
17. If a security alert appears about the system’s security certificate,
click Yes to proceed.


18. At the login prompt, log in as user root. The default password is
nasadmin.
19. If a security warning appears about the system’s security
certificate being issued by an untrusted source, click Yes to accept
the certificate.
20. If a warning about a hostname mismatch appears, click Yes.
21. On the Celerra > Storage Systems page, click Rescan, as shown
in Figure 9 on page 19.

Figure 9 Rescan Storage System in Celerra Manager


! CAUTION
Do not change the host LUN (virtual disk) identifier of the Celerra
LUNs (virtual disks) after rescanning. This may cause data loss or
unavailability.

22. The user virtual disks (LUNs) are now available for the Celerra
system.
When you have finished “Configure storage for a Fibre Channel enabled system,” go to “Configure the network” on page 21.


Configure the network


Using Celerra Manager, you can create interfaces on devices that are
not part of a virtual device. Host or workstation access to the Celerra
storage is configured by creating a network interface.

Note: You cannot create a new interface for a Data Mover while the Data
Mover is failed over to its standby.

In Celerra Manager, configure a new network interface and device by doing the following:
1. Log in to Celerra Manager as root.
2. Click Celerras > <Celerra_name> > Wizards.
3. Click New Network Interface wizard to set up a new network
interface. This wizard can also be used to create a new virtual
device, if desired.

Note: On the Select/Create a network device page, click Create Device to create a new virtual network device. The new virtual device can be configured with one of the following high-availability features: Ethernet Channel, Link Aggregation, or Fail-Safe Network.
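If you prefer the Control Station command line to the wizard, the same interface can be created with the server_ifconfig command. The following is only a sketch: the Data Mover (server_2), device (cge0), interface name (cge0_1), and addresses are placeholders for the values recorded in Table 1, and the option syntax should be verified against the server_ifconfig man page for your release.
# server_ifconfig server_2 -create -Device cge0 -name cge0_1 -protocol IP 192.168.1.10 255.255.255.0 192.168.1.255
# server_ifconfig server_2 -all
The second command lists the Data Mover's interfaces so you can confirm that the new interface is configured.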

Once you have completed the New Network Interface wizard and
successfully created a new network interface and an optional new
virtual device, go to “Create a file system” on page 22.


Create a file system


To create a new file system, complete the following steps:
1. Go to Celerras > <Celerra_name> > File Systems tab in the left
navigation menu.
2. Click New at the bottom of the File Systems screen, as shown in
Figure 10.

Figure 10 File Systems screen

3. Select the Storage Pool radio button to specify where the file system will be created, as shown in Figure 11 on page 23.


Figure 11 Create new File System screen

4. Name the file system.


5. Select the system-defined storage pool from the Storage Pool
drop-down menu.

Note: Based on the disks and the RAID types created in the storage system, different system-defined storage pools will appear in the storage pool list. For more information about system-defined storage pools, refer to “Disk group and disk volume configurations” on page 38.

6. Designate the Storage Capacity of the file system and select any
other desired options.


Other file system options are listed below:


• Auto Extend Enabled: If enabled, the file system
automatically extends when the high water mark is reached.
• Virtual Provisioning Enabled: This option can only be used
with automatic file system extension and together they let you
grow the file system as needed.
• File-level Retention (FLR) Capability: If enabled, the file system is persistently marked as an FLR file system until it is deleted. File systems can be enabled with FLR capability only at creation time.
7. Click Create. The new file system will now appear on the File
Systems screen, as shown in Figure 12 on page 24.

Figure 12 File System screen with new file system
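As an alternative to Celerra Manager, a file system can also be created and mounted from the Control Station command line. The sketch below is illustrative only: the file system name (fs01), size (100G), storage pool (clar_r5_performance), and mount path are placeholders for your worksheet values, and the exact options should be checked against the nas_fs, server_mountpoint, and server_mount man pages for your release.
# nas_fs -name fs01 -create size=100G pool=clar_r5_performance
# server_mountpoint server_2 -create /fs01
# server_mount server_2 fs01 /fs01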


Delete iSCSI LUN created during startup


You may have optionally created an iSCSI LUN using the Celerra Startup Assistant (CSA). If you have a minimum configuration of five or fewer disks, you can begin to use this LUN as a production LUN. If you have more than five disks, delete the iSCSI LUN created during startup.
1. To delete the iSCSI LUN created during startup and make the file system unavailable to iSCSI users on the network:
a. Click the iSCSI LUNs tab (Celerras > <Celerra_name> > iSCSI).
b. Select one or more LUNs to delete, and click Delete.
The Confirm Delete page appears.
2. Click OK.
When you have deleted the iSCSI LUN created during startup, go to “Create iSCSI target” on page 26.


Create iSCSI target


To create a new iSCSI target, do the following:
1. Go to Celerras > <Celerra_name> > iSCSI and click the Targets
tab.
2. Click New as shown in Figure 13.

Figure 13 Create a new iSCSI target

3. Select Data Mover as shown in Figure 14 on page 27.


4. Enter a name for the iSCSI target as shown in Figure 14 on
page 27.
5. Optionally create an iSCSI Qualified Target Name as shown in
Figure 14 on page 27.


6. Optionally select network portals, as shown in Figure 14 on page 27.
7. Click OK, as shown in Figure 14.

Figure 14 Configure new iSCSI target
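The same target can also be created from the Control Station with the server_iscsi command. Treat the following as a sketch: the target alias (target1) and the network portal address are placeholders, and the portal-group syntax shown (1:np=<IP>) is an assumption to confirm against the server_iscsi documentation for your Celerra release.
# server_iscsi server_2 -target -alias target1 -create 1:np=192.168.1.10
# server_iscsi server_2 -target -list
The second command lists the Data Mover's iSCSI targets so you can confirm that the new target and its portals were created as intended.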

When you have completed “Create iSCSI target,” go to “Create iSCSI LUN” on page 28.


Create iSCSI LUN


To create a new iSCSI LUN, do the following:
1. Go to Celerras > <Celerra_name> > iSCSI and click the LUNs tab.
2. Click New as shown in Figure 15.

Figure 15 Create new iSCSI LUN

3. Select Data Mover as shown in Figure 16 on page 29.


4. Select iSCSI target as shown in Figure 16 on page 29.
5. Enter a LUN number above 16 as shown in Figure 16 on page 29.
6. Specify LUN size as shown in Figure 16 on page 29.
7. Specify whether the LUN is read-only, as shown in Figure 16 on page 29.


8. Click OK as shown in Figure 16.

Figure 16 Configure iSCSI LUN
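An iSCSI LUN can likewise be created from the Control Station command line. In this sketch, LUN 20 of 10 GB is created on the target alias target1 in file system fs01; all names and sizes are placeholders, and the option names are assumptions to verify against the server_iscsi documentation for your release.
# server_iscsi server_2 -lun -number 20 -create target1 -size 10G -fs fs01
# server_iscsi server_2 -lun -list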

When you have completed “Create iSCSI LUN,” go to “Configure hosts” on page 30.


Configure hosts
Refer to the Installing iSCSI Host Components technical module for information about configuring iSCSI hosts and implementing iSCSI.
Listed below are rough outlines of the management tasks for various OS hosts:
Management tasks for Windows hosts:
• Installing Celerra host components for Windows
• Setting up the Microsoft iSCSI Initiator
• Configuring iSCSI LUNs as disk drives in Windows
• Using Celerra iSCSI host components for Windows
Management tasks for Linux hosts:
• Installing Celerra iSCSI host components for Linux
• Setting up the Linux iSCSI initiator (see the sketch after these lists)
• Configuring CHAP authentication for CBMCLI operations
• Configuring iSCSI LUNs as disk drives in Linux
• Using Celerra iSCSI host components for Linux
Management tasks for AIX hosts:
• Installing Celerra host components for AIX
• Setting up the IBM AIX iSCSI initiator
• Troubleshooting Celerra iSCSI host component problems
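As an outline of the Linux host-side flow (discover the target, log in, confirm the session), the commands below use the open-iscsi iscsiadm utility found on later Linux distributions rather than the older linux-iscsi initiator versions cited under the host requirements, so treat them only as a sketch; the portal address and target IQN are placeholders for the values reported by your Celerra target.
# iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260
# iscsiadm -m node -T iqn.1992-05.com.emc:example-target1 -p 192.168.1.10:3260 --login
# iscsiadm -m session
After login, the new LUN appears to the host as a SCSI disk that can be partitioned and formatted like a local drive.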


Configure and test standby relationships


EMC recommends that multi-blade Celerra systems be configured
with a Primary/Standby blade failover configuration to ensure data
availability in the case of a blade (server/Data Mover) fault.
Creating a standby blade ensures continuous access to file systems on
the Celerra storage system. When a primary blade fails over to a
standby, the standby blade assumes the identity and functionality of
the failed blade and functions as the primary blade until the faulted
blade is healthy and manually failed back to functioning as the
primary blade.

Configure a standby relationship

A blade must first be configured as a standby for one or more primary blades for that blade to function as a standby blade when required.
To configure a standby blade:
1. Determine the ideal blade failover configuration for the Celerra
system based on site requirements and EMC recommendations.
EMC recommends a minimum of one standby blade for up to
three Primary blades.

! CAUTION
The standby blade(s) must have the same network capabilities (NICs and cables) as the primary blades with which they will be associated. This is because the standby blade will assume the faulted primary blade's network identity (NIC IP and MAC addresses), storage identity (controlled file systems), and service identity (controlled shares and exports).

2. Define the standby configuration using Celerra Manager, following the blade standby configuration recommendation:
a. Select <Celerra_name> > Data Movers > <desired_primary_blade> from the left-hand navigation panel.
b. On the Data Mover Properties screen, configure the standby blade for the selected primary blade by checking the box of the desired Standby Mover and defining the Failover Policy.


Figure 17 Configure a standby in Celerra Manager

Note: A failover policy is a predetermined action that the Control Station invokes when it detects a blade failure, based on the failover policy type specified. It is recommended that the Failover Policy be set to auto.

c. Click Apply.

Note: The blade configured as standby will now reboot.


d. Repeat for each primary blade in the Primary/Standby configuration.
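The same standby relationship can be defined from the Control Station with the server_standby command. The line below is a sketch that assumes server_3 will act as the standby for server_2 with an automatic failover policy; confirm the option syntax against the server_standby man page for your release.
# server_standby server_2 -create mover=server_3 -policy auto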

Test the standby configuration

It is recommended that the functionality of the blade failover configuration be tested prior to the system going into production. When a failover condition occurs, the Celerra is able to transfer functionality from the primary blade to the standby blade without disrupting file system availability.
For a standby blade to successfully stand in as a primary blade, the blades must have the same network connections (Ethernet and Fibre cables), network configurations (EtherChannel, Fail-Safe Network, High Availability, and so forth), and switch configuration (VLAN configuration, and so forth).

! CAUTION
You must cable the failover blade identically to its primary blade.
If configured network ports are left uncabled when a failover occurs, access to file systems will be disrupted.

To test the failover configuration, do the following:
1. Open an SSH session to the Control Station (CS) using an SSH client such as PuTTY.
2. Log in to the CS as nasadmin. Change to the root user by
entering the following command:
su root

Note: The default password for root is nasadmin.

3. Collect the current names and types of the system blades:


# nas_server -l
Sample output:
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 4 1000 3 0 server_3

Note: The command output above provides the state and name of each blade. The type column designates the blade type as 1 (primary) or 4 (standby).


4. After I/O traffic is running on the primary blade's network port(s), monitor this traffic by entering:
# server_netstat <server_name> -i
Example:
[nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i

Name Mtu Ibytes Ierror Obytes Oerror PhysAddr

****************************************************************************

fxg0 9000 0 0 0 0 0:60:16:32:4a:30


fxg1 9000 0 0 0 0 0:60:16:32:4a:31
mge0 9000 851321 0 812531 0 0:60:16:2c:43:2
mge1 9000 28714095 0 1267209 0 0:60:16:2c:43:1
cge0 9000 614247 0 2022 0 0:60:16:2b:49:12
cge1 9000 0 0 0 0 0:60:16:2b:49:13

5. Manually force a graceful failover of the primary blade to the standby blade by using the following command:
# server_standby <primary_blade> -activate mover

Example:
[nasadmin@rtpplat11cs0 ~]$ server_standby server_2
-activate mover

server_2 :
server_2 : going offline
server_3 : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done

server_2 : renamed as server_2.faulted.server_3


server_3 : renamed as server_2

Note: This command will rename the primary and standby blades. In the example above, server_2, the primary blade, was rebooted and renamed server_2.faulted.server_3, and server_3 was renamed server_2.


6. Verify that the failover has completed successfully by:


a. Checking that the blades have changed names and types:
# nas_server -l
Sample output:
id type acl slot groupID state name
1 1 1000 2 0 server_2.faulted.server_3
2 1 1000 3 0 server_2

Note: In the command output above, each blade's name has changed, and the type column designates both blades as type 1 (primary).

b. Checking that I/O traffic is flowing to the primary blade by entering:
# server_netstat <server_name> -i

Note: The primary blade, though physically a different blade, retains the initial name.

Sample output:
[nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i

Name Mtu Ibytes Ierror Obytes Oerror PhysAddr

****************************************************************************
fxg0 9000 0 0 0 0 0:60:16:32:4b:18
fxg1 9000 0 0 0 0 0:60:16:32:4b:19
mge0 9000 14390362 0 786537 0 0:60:16:2c:43:30
mge1 9000 16946 0 3256 0 0:60:16:2c:43:31
cge0 9000 415447 0 3251 0 0:60:16:2b:49:12
cge1 9000 0 0 0 0 0:60:16:2b:48:ad

Note: The MAC addresses in the PhysAddr column have changed, thus reflecting that the failover completed successfully.

7. Verify that the blades appear with reason code 5 by typing:
# /nas/sbin/getreason
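Illustrative output is shown below; the slot numbers and the Control Station line will differ on your system, but each blade line should report code 5 (contacted) before you proceed.
Sample output:
10 - slot_0 primary control station
 5 - slot_2 contacted
 5 - slot_3 contacted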


8. After the blades appear with reason code 5, manually restore the
failed over blade to its primary status by typing the following
command:
# server_standby <primary_blade> -restore mover
Example:
server_standby server_2 -restore mover

server_2 :
server_2 : going standby
server_2.faulted.server_3 : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done

server_2 : renamed as server_3


server_2.faulted.server_3 : renamed as server_2

Note: This command will rename the primary and standby blades. In the example above, server_2, the standby blade acting as primary, was rebooted and renamed server_3, and server_2.faulted.server_3 was renamed server_2.

9. Verify that the failback has completed successfully by:


a. Checking that the blades have changed back to the original
name and type:
# nas_server -l
Sample output:
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 4 1000 3 0 server_3


b. Checking that I/O traffic is flowing to the primary blade by entering:
# server_netstat <server_name> -i
Sample output:
[nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i

Name Mtu Ibytes Ierror Obytes Oerror PhysAddr

****************************************************************************

fxg0 9000 0 0 0 0 0:60:16:32:4a:30


fxg1 9000 0 0 0 0 0:60:16:32:4a:31
mge0 9000 851321 0 812531 0 0:60:16:2c:43:2
mge1 9000 28714095 0 1267209 0 0:60:16:2c:43:1
cge0 9000 314427 0 1324 0 0:60:16:2b:49:12
cge1 9000 0 0 0 0 0:60:16:2b:49:13

Note: The MAC addresses in the PhysAddr column have reverted to their original values, thus reflecting that the failback completed successfully.

Refer to the Configuring Standbys on EMC Celerra technical module on http://Powerlink.EMC.com for more information about determining and defining blade standby configurations.


Appendix
This appendix provides additional information about the disk groups
and volume configurations based on the system’s drive attach types.

Disk group and disk volume configurations

Table 2 maps a disk group type to a storage profile, associating the RAID type and the storage space that results in the automatic volume management (AVM) pool. The storage profile name is a set of rules used by AVM to determine what type of disk volumes to use to provide storage for the pool.

Table 2 Disk group and disk volume configurations

Disk group type              Attach type     Storage profile
RAID 5 8+1                   Fibre Channel   clar_r5_economy (8+1)
RAID 5 4+1                   Fibre Channel   clar_r5_performance (4+1)
RAID 1                       Fibre Channel   clar_r1
RAID 6 4+2, RAID 6 12+2      Fibre Channel   clar_r6
RAID 5 6+1                   ATA             clarata_archive
RAID 5 4+1 (CX3 only)        ATA             clarata_archive
RAID 3 4+1, RAID 3 8+1       ATA             clarata_r3
RAID 6 4+2, RAID 6 12+2      ATA             clarata_r6
RAID 5 6+1 (CX3 only)        LCFC            clarata_archive
RAID 5 4+1 (CX3 only)        LCFC            clarata_archive
RAID 3 4+1, RAID 3 8+1       LCFC            clarata_r3
RAID 6 4+2, RAID 6 12+2      LCFC            clarata_r6
RAID 5 2+1                   SATA            clarata_archive
RAID 5 3+1                   SATA            clarata_archive
RAID 5 4+1                   SATA            clarata_archive
RAID 5 5+1                   SATA            clarata_archive
RAID 1/0 (2 disks)           SATA            clarata_r10
RAID 6 4+2                   SATA            clarata_r6
RAID 5 2+1                   SAS             clarsas_archive
RAID 5 3+1                   SAS             clarsas_archive
RAID 5 4+1                   SAS             clarsas_archive
RAID 5 5+1                   SAS             clarsas_archive
RAID 1/0 (2 disks)           SAS             clarsas_r10
RAID 6 4+2                   SAS             clarsas_r6


