Overview
Before you begin this iSCSI implementation procedure, ensure that
you have completed the following tasks.
Procedure overview
To create an iSCSI target and iSCSI LUN, you must perform the
following tasks:
1. Verify that you have performed the pre-implementation tasks:
• Create a Powerlink® account.
• Register your Celerra with EMC® or your service provider.
• Install Navisphere® Service Taskbar (NST).
• Add additional disk array enclosures (DAEs) using the NST
(Not available for NX4).
2. Complete the implementation worksheets.
3. Cable additional Celerra ports to your network system.
4. Configure unused or new disks with Navisphere Express.
5. Configure your network by creating a new interface to access the
Celerra storage from a host or workstation.
6. Create a file system using a system-defined storage pool.
7. Delete the iSCSI LUN created during startup.
8. Create an iSCSI target.
9. Create an iSCSI LUN.
10. Configure host access.
11. Configure and test the standby relationship.
Pre-implementation tasks
Before you begin this iSCSI implementation procedure, ensure that
you have completed the following tasks.
Register your system with EMC
If you did not register your Celerra at the completion of the Celerra
Startup Assistant, you can do so now by downloading the
Registration wizard from Powerlink.
The Registration wizard can also be found on the Applications and
Tools CD that was shipped with your system.
Registering your Celerra ensures that EMC Customer Support has all
pertinent system and site information so they can properly assist you.
Download and install the Navisphere Service Taskbar (NST)
The NST is available for download from the CLARiiON® Tools page
on Powerlink and on the Applications and Tools CD that was
shipped with your system.
Add additional disk array enclosures
Use the NST to add new disk array enclosures (DAEs) to fully
implement your Celerra (Not available for NX4).
Implementation worksheets
Before you begin this implementation procedure, take a moment to fill
out the following implementation worksheets with the values of the
various devices you will need to create.
Create interface worksheet
The New Network Interface wizard configures individual network
interfaces for the Data Movers. It can also create virtual network
devices: Link Aggregation, Fail-Safe Network, or Ethernet Channel.
Use Table 1 to complete the New Network Interface wizard. You will
need the following information:
Does the network use variable-length subnets?
❑ Yes ❑ No
Note: If the network uses variable-length subnets, be sure to record the
correct subnet mask. Do not assume 255.255.255.0 or other common values.
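If you prefer the Control Station command line to the New Network
Interface wizard, an interface can also be created with the
server_ifconfig command. The following is a minimal sketch only: the
device name (cge0), interface name (cge0_int), and addresses are
placeholders, and the exact options should be verified against the
Celerra command reference for your NAS code version.
$ server_ifconfig server_2 -create -Device cge0 -name cge0_int -protocol IP 192.168.1.100 255.255.255.0 192.168.1.255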
Create file system worksheet
The Create File System step creates a file system on a Data Mover.
This step can be repeated as needed to create additional file systems.
Read/Write Data Mover: ❑ server_2 ❑ server_3
Volume Management: ❑ Automatic (recommended)
Storage Pool for Automatic Volume Management:
❑ CLARiiON RAID 1 (Not available for NX4)
❑ CLARiiON RAID 5 Performance
❑ CLARiiON RAID 5 Economy
❑ CLARiiON RAID 1/0
❑ CLARiiON RAID 6
File System Name ____________________________________________
File System Size (megabytes) __________________________________
Use Default User and Group Quotas: ❑ Yes ❑ No
Hard Limit for User Storage (megabytes) ____________________
Soft Limit for User Storage (megabytes) _____________________
Hard Limit for User Files (files)_____________________________
Soft Limit for User Files (files) _____________________________
Hard Limit for Group Storage (megabytes) __________________
Soft Limit for Group Storage (megabytes) ___________________
Hard Limit for Group Files (files) ___________________________
Soft Limit for Group Files (files) ____________________________
Enforce Hard Limits: ❑ Yes ❑ No
Grace Period for Storage (days)_____________________________
Grace Period for Files (days) _______________________________
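The same file system can also be created from the Control Station
command line instead of the Create File System wizard. This is a
hedged sketch: the file system name, size, pool, and mount point are
placeholders, and the options should be checked against the nas_fs
and server_mount man pages for your release.
$ nas_fs -name iscsi_fs01 -create size=100G pool=clar_r5_performance
$ server_mount server_2 iscsi_fs01 /iscsi_fs01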
Create an iSCSI target worksheet
The Create an iSCSI Target step creates an iSCSI target on a Data
Mover. This step can be repeated as needed to create additional iSCSI
targets.
Data Mover: ❑ server_2 ❑ server_3 ❑ server_4 ❑ server_5
Target Alias Name ___________________________________________
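As a command-line alternative to the Create an iSCSI Target wizard
step, a target can typically be created with server_iscsi. This sketch
assumes a target alias of target1 and a single network portal on the
interface address 192.168.1.100; verify the portal syntax in the
Celerra iSCSI documentation for your release.
$ server_iscsi server_2 -target -alias target1 -create 1:np=192.168.1.100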
Create an iSCSI LUN worksheet
The Create an iSCSI LUN step creates an iSCSI LUN on a Data
Mover. This step can be repeated as needed to create additional iSCSI
LUNs.
Data Mover: ❑ server_2 ❑ server_3 ❑ server_4 ❑ server_5
Target Name _________________________________________________
Target Portals
______________________________________
______________________________________
______________________________________
File System Name ____________________________________________
Create Multiple LUNs: ❑ Yes ❑ No
Number of LUNs to Create:____________________________________
LUNs:
LUN Number (automatically assigned for multiple LUNs) ______
Size of LUN (megabytes) ___________________________________
LUN Number (automatically assigned for multiple LUNs) ______
Size of LUN (megabytes) ___________________________________
LUN Number (automatically assigned for multiple LUNs) ______
Size of LUN (megabytes) ___________________________________
LUN Number (automatically assigned for multiple LUNs) ______
Size of LUN (megabytes) ___________________________________
LUN Number (automatically assigned for multiple LUNs) ______
Size of LUN (megabytes) ___________________________________
LUN Number (automatically assigned for multiple LUNs) ______
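As with targets, iSCSI LUNs can also be created from the Control
Station command line. The sketch below assumes the target alias
target1, LUN number 1, a 10240 MB LUN, and the file system
iscsi_fs01; treat it as illustrative only and confirm the server_iscsi
-lun options in the command reference for your release.
$ server_iscsi server_2 -lun -number 1 -create target1 -size 10240 -fs iscsi_fs01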
[Figure CIP-000560: Blade rear-panel port layout showing the internal
management module, network ports cge0 through cge3, Com 1, Com 2, and
ports BE 0, BE 1, AUX 0, and AUX 1.]
[Figure CNS-001256: Blade rear-panel port layout showing the internal
management module, network ports fxg0, fxg1, cge0, and cge1, Com 1,
Com 2, and ports BE 0, BE 1, AUX 0, and AUX 1.]
Configure storage with Navisphere Express
Configure storage with Navisphere Express by doing the following:
1. To start Navisphere Express, open an internet browser such as
Internet Explorer or Mozilla Firefox.
2. Type the IP address of a storage processor of the storage system
into the internet browser address bar.
Note: This IP address is the one that you assigned when you initialized
the storage system.
Note: If you are trying to create a new virtual disk (LUN) for Automatic
Volume Management (AVM) to use in a stripe with existing virtual disks,
the new virtual disk must match the size of the existing virtual disks.
To find this information, go to the details page for each existing
virtual disk by selecting Manage > Virtual Disks >
<Existing_Virtual_Disk_Name>. Record the MB value of the existing
virtual disks and use this value as the size for any new virtual disk.
Note: You should create at least two disk pools. The software assigns
each disk pool that you create to an SP as follows: Disk Pool 1 to SP A,
Disk Pool 2 to SP B, Disk Pool 3 to SP A, Disk Pool 4 to SP B, and so on.
All virtual disks that you create on a disk pool are automatically assigned
to the same SP as the disk pool. If you create only one disk pool on the
storage system, all virtual disks on the storage system are assigned to SP
A, and all data received or sent goes through SP A.
6. Select the RAID group type for the new disk pool, as shown in
Figure 6 on page 15.
• The available RAID Group Type values depend on your
system.
• For more information, see the NAS Support Matrix document on
http://Powerlink.EMC.com.
Note: RAID 5 is recommended.
8. Click Apply.
9. Click Create a virtual disk that can be assigned to a server.
10. Select the disk pool just created, as shown in Figure 6.
11. Type the Name for the new virtual disk(s), and select its Capacity
and the Number of Virtual Disks to create, as shown in Figure 7
on page 16.
12. Assign a server to the virtual disk(s) by using the Server list box,
as shown in Figure 7.
Note: To send data to or receive data from a virtual disk, you must assign
a server to the virtual disk.
Note: The system now creates the virtual disks. This may take some time
depending on the size of the virtual disks.
14. Select Virtual Disks from the left navigation panel to verify the
creation of the new virtual disk(s).
15. Verify the virtual disk server assignment by looking under
Assigned To on the Manage Virtual Disks page, as shown in
Figure 8.
16. To make the new virtual disks (LUNs) available to the Celerra
system, you must use Celerra Manager. Launch Celerra Manager
by opening the following URL:
https://<control_station>
where <control_station> is the hostname or IP address of the
Control Station.
17. If a security alert appears about the system’s security certificate,
click Yes to proceed.
18. At the login prompt, log in as user root. The default password is
nasadmin.
19. If a security warning appears about the system’s security
certificate being issued by an untrusted source, click Yes to accept
the certificate.
20. If a warning about a hostname mismatch appears, click Yes.
21. On the Celerra > Storage Systems page, click Rescan, as shown
in Figure 9 on page 19.
! CAUTION
Do not change the host LUN (virtual disk) identifier of the Celerra
LUNs (virtual disks) after rescanning. Doing so may cause data loss
or unavailability.
22. The user virtual disks (LUNs) are now available for the Celerra
system.
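As an optional cross-check after the rescan, the newly discovered
disk volumes can also be listed from the Control Station command
line; this is an assumption about a typical verification step rather
than part of the original procedure.
$ nas_disk -list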
When you have finished “Configure storage for a Fibre Channel
enabled system,” go to “Configure the network” on page 21.
Note: You cannot create a new interface for a Data Mover while the Data
Mover is failed over to its standby.
Once you have completed the New Network Interface wizard and
successfully created a new network interface and an optional new
virtual device, go to “Create a file system” on page 22.
3. Select the Storage Pool radio button to select the storage pool
from which the file system will be created, as shown in Figure 11
on page 23.
Note: Based on the disks and the RAID types created in the storage
system, different system-defined storage pools will appear in the storage
pool list. For more information about system-defined storage pools, refer
to “Disk group and disk volume configurations” on page 38.
6. Designate the Storage Capacity of the file system and select any
other desired options.
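To see which system-defined storage pools are available, and how
much space each one currently offers, you can also query AVM from
the Control Station command line. This is a sketch of a typical check;
the pool name clar_r5_performance is only an example.
$ nas_pool -list
$ nas_pool -size clar_r5_performance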
Configure hosts
Refer to the Installing iSCSI Host Components technical module for
information about configuring iSCSI hosts and for more information
about implementing iSCSI.
Listed below are rough outlines of the management tasks for various
host operating systems:
Management tasks for Windows hosts:
• Installing Celerra host components for Windows
• Setting up the Microsoft iSCSI Initiator
• Configuring iSCSI LUNs as disk drives in Windows
• Using Celerra iSCSI host components for Windows
Management tasks for Linux hosts:
• Installing Celerra iSCSI host components for Linux
• Setting up the Linux iSCSI initiator (a command-line sketch
follows these lists)
• Configuring CHAP authentication for CBMCLI operations
• Configuring iSCSI LUNs as disk drives in Linux
• Using Celerra iSCSI host components for Linux
Management tasks for AIX hosts:
• Installing Celerra host components for AIX
• Setting up the IBM AIX iSCSI initiator
• Troubleshooting Celerra iSCSI host component problems
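As an illustration of the Linux initiator setup referenced above, the
following open-iscsi commands discover a Celerra target and log in
to it. This is a hedged sketch: the portal address and target IQN are
placeholders, older distributions may ship a different initiator, and
the Installing iSCSI Host Components module remains the supported
procedure.
# iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260
# iscsiadm -m node -T iqn.1992-05.com.emc:example-target -p 192.168.1.100:3260 --login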
Configure a standby relationship
A blade must first be configured as a standby for one or more
primary blades before it can function as a standby blade when
required.
To configure a standby blade:
1. Determine the ideal blade failover configuration for the Celerra
system based on site requirements and EMC recommendations.
EMC recommends a minimum of one standby blade for up to
three primary blades.
! CAUTION
The standby blade(s) must have the same network capabilities
(NICs and cables) as the primary blades with which they will be
associated. This is because the standby blade assumes the
faulted primary blade’s network identity (NIC IP and MAC
addresses), storage identity (controlled file systems), and
service identity (controlled shares and exports).
c. Click Apply.
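The standby relationship can also be configured from the Control
Station command line. The following sketch assumes server_3 is
being made the standby for server_2 with an automatic failover
policy; confirm the server_standby -create syntax in the command
reference for your release before relying on it.
$ server_standby server_2 -create mover=server_3 -policy auto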
Test the standby configuration
It is recommended that the functionality of the blade failover
configuration be tested before the system goes into production.
When a failover condition occurs, the Celerra can transfer
functionality from the primary blade to the standby blade without
disrupting file system availability.
For a standby blade to successfully stand in for a primary blade, the
blades must have the same network connections (Ethernet and Fibre
cables), network configurations (EtherChannel, Fail Safe Network,
High Availability, and so forth), and switch configuration (VLAN
configuration, and so on).
! CAUTION
You must cable the failover blade identically to its primary blade.
If configured network ports are left uncabled when a failover
occurs, access to file systems will be disrupted.
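The listing referenced in the following note is not reproduced here.
One command that reports each blade’s id, type, state, and name
from the Control Station is shown below; this is an assumption about
the step, since the original procedure may have used a different
command.
$ nas_server -list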
Note: The command output above provides the state and name of each
blade. The type column designates the blade type as 1 (primary) or 4
(standby).
Example:
[nasadmin@rtpplat11cs0 ~]$ server_standby server_2 -activate mover
server_2 :
server_2 : going offline
server_3 : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
Note: This command will rename the primary and standby blades. In the
example above, server_2, the primary blade, was rebooted and renamed
server_2.faulted.server_3, and server_3 was renamed server_2.
Note: In the command output above, each blade’s state and name have
changed, and the type column designates both blades as type 1
(primary).
Sample output:
[nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i
fxg0 9000 0 0 0 0 0:60:16:32:4b:18
fxg1 9000 0 0 0 0 0:60:16:32:4b:19
mge0 9000 14390362 0 786537 0 0:60:16:2c:43:30
mge1 9000 16946 0 3256 0 0:60:16:2c:43:31
cge0 9000 415447 0 3251 0 0:60:16:2b:49:12
cge1 9000 0 0 0 0 0:60:16:2b:48:ad
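Reason code 5 in the following step typically indicates that a blade
has booted and is in contact with the Control Station. One way to
check the per-slot reason codes is the getreason utility; the path
below is typical but may vary by release.
$ /nas/sbin/getreason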
8. After the blades appear with reason code 5, manually restore the
failed-over blade to its primary status by typing the following
command:
# server_standby <primary_blade> -restore mover
Example:
server_standby server_2 -restore mover
server_2 :
server_2 : going standby
server_2.faulted.server_3 : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
Note: This command will rename the primary and standby blades. In the
example above, server_2, the standby blade that was acting as primary,
was rebooted and renamed server_3, and server_2.faulted.server_3 was
renamed server_2.
Appendix
This appendix provides additional information about the disk groups
and volume configurations based on the system’s drive attach types.
Disk group and disk volume configurations
Table 2 maps a disk group type to a storage profile, associating the
RAID type and the storage space that results in the automatic volume
management (AVM) pool. The storage profile name is a set of rules
used by AVM to determine what type of disk volumes to use to
provide storage for the pool.