
Create an NFS export

This procedure explains how to create a Network File System (NFS) export on your Celerra system. The Celerra system is a multiprotocol machine that provides access to data through the NFS protocol for file sharing in network environments. The NFS protocol enables the Celerra Network Server to assume the functions of an NFS server. NFS environments typically include:
- Native UNIX clients
- Linux clients
- Windows systems configured with third-party applications that provide NFS client services

Overview ............................................................ 2
Pre-implementation tasks ............................................ 4
Implementation worksheets ........................................... 5
Connect external network cables ..................................... 7
Configure storage for a Fibre Channel enabled system ................ 9
Configure the network ............................................... 19
Create a file system ................................................ 20
Delete the NFS export created during startup ........................ 23
Create NFS exports .................................................. 24
Configure hosts ..................................................... 28
Configure and test standby relationships ............................ 29
Appendix ............................................................ 36


Overview
This section contains an overview of the NFS implementation procedure and the host requirements for NFS implementation.

Procedure overview

To create an NFS export, you must perform the following tasks:
1. Verify that you have performed the pre-implementation tasks:
   - Create a Powerlink account.
   - Register your Celerra with EMC or your service provider.
   - Install the Navisphere Service Taskbar (NST).
   - Add additional disk array enclosures (DAEs) using the NST (Not available for NX4).
2. Complete the implementation worksheets.
3. Cable additional Celerra ports to your network system.
4. Configure unused or new disks with Navisphere Express.
5. Configure your network by creating a new interface to access the Celerra storage from a host or workstation.
6. Create a file system using a system-defined storage pool.
7. Delete the NFS export created during startup.
8. Create an NFS export from the file system.
9. Configure host access to the NFS export.
10. Configure and test standby relationships.

Host requirements for NFS

Software:
- Celerra Network Server version 5.6
- For secure NFS using UNIX or Linux-based Kerberos: Sun Enterprise Authentication Mechanism (SEAM) software or a Linux KDC running Kerberos version 5
Note: KDCs from other UNIX systems have not been tested.

- For secure NFS using Windows-based Kerberos: Windows 2000 or Windows Server 2003 domain

To use secure NFS, the client computer must be running one of the following:
- SunOS version 5.8 or later (Solaris 10 for NFSv4)
- Linux kernel 2.4 or later (2.6.12 with NFSv4 patches for NFSv4)
- Hummingbird Maestro version 7 or later (EMC recommends version 8); version 9 for NFSv4
- AIX 5.3 ML3
Note: Other clients have not been tested.

- DNS (Domain Name System)
- NTP (Network Time Protocol) server


Note: Windows environments require that you configure Celerra in the Active Directory.

Hardware: No specific hardware requirements
Network: No specific network requirements
Storage: No specific storage requirements


Pre-implementation tasks
Before you begin this NFS implementation procedure, ensure that you have completed the following tasks.

Create a Powerlink account

You can create a Powerlink account at http://Powerlink.EMC.com. Use this website to access additional EMC resources, including documentation, release notes, software updates, information about EMC products, licensing, and service.

Register your system with EMC

If you did not register your Celerra at the completion of the Celerra Startup Assistant, you can do so now by downloading the Registration wizard from Powerlink. The Registration wizard can also be found on the Applications and Tools CD that was shipped with your system. Registering your Celerra ensures that EMC Customer Support has all pertinent system and site information so they can properly assist you.

Download and install the Navisphere Service Taskbar (NST)

The NST is available for download from the CLARiiON Tools page on Powerlink and on the Applications and Tools CD that was shipped with your system.

Add additional disk array enclosures

Use the NST to add new disk array enclosures (DAEs) to fully implement your Celerra (Not available for NX4).


Implementation worksheets
Before you begin this implementation procedure, take a moment to fill out the following implementation worksheets with the values for the various devices you will need to create.

Create interface worksheet

The New Network Interface wizard configures individual network interfaces for the Data Movers. It can also create virtual network devices: Link Aggregation, Fail-Safe Network, or Ethernet Channel. Use Table 1 to complete the New Network Interface wizard. You will need the following information:

Does the network use variable-length subnets?  Yes / No
Note: If the network uses variable-length subnets, be sure to use the correct subnet mask. Do not assume 255.255.255.0 or other common values.
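For example, a client network that uses a /26 subnet requires the mask 255.255.255.192 rather than 255.255.255.0.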

Table 1    Create interface worksheet

- Data Mover number: _____________________________________________
- Device name or virtual device name: ____________________________
- IP address: ____________________________________________________
- Netmask: _______________________________________________________
- Maximum Transmission Unit (MTU) (optional): ____________________
- Virtual LAN (VLAN) identifier (optional): ______________________
- Devices (optional): ____________________________________________


Create file system worksheet

The Create File System step creates a file system on a Data Mover. This step can be repeated as needed to create additional file systems.

- Volume Management: Automatic (recommended)
- Storage Pool for Automatic Volume Management: CLARiiON RAID 1 (Not available for NX4) / CLARiiON RAID 5 Performance / CLARiiON RAID 5 Economy / CLARiiON RAID 1/0 / CLARiiON RAID 6
- Read/Write Data Mover: server_2 / server_3
- File System Name ____________________________________________
- File System Size (megabytes) __________________________________
- Use Default User and Group Quotas: Yes / No
- Hard Limit for User Storage (megabytes) ____________________
- Soft Limit for User Storage (megabytes) _____________________
- Hard Limit for User Files (files) _____________________________
- Soft Limit for User Files (files) _____________________________
- Hard Limit for Group Storage (megabytes) __________________
- Soft Limit for Group Storage (megabytes) ___________________
- Hard Limit for Group Files (files) ___________________________
- Soft Limit for Group Files (files) ____________________________
- Enforce Hard Limits: Yes / No
- Grace Period for Storage (days) _____________________________
- Grace Period for Files (days) _______________________________

NFS export worksheet

- NFS export pathname (for example: /test_fs/): __________________________________________________
- IP address of client computer: ______________________________

When you have completed the Implementation worksheets, go to Connect external network cables on page 7.


Connect external network cables


If you have not already done so, connect the desired blade network ports to your network system. Figure 1 shows the network ports of the 4-port copper Ethernet X-blade for an NX4; they are labeled cge0 through cge3. Figure 2 on page 7 shows the network ports of the 2-port copper Ethernet and 2-port optical 10 GbE X-blade for an NX4; they are labeled cge0-cge1 and fxg0-fxg1.

Figure 1
4-port copper Ethernet X-blade (network ports cge0 through cge3)

Figure 2
2-port copper Ethernet and 2-port optical 10 GbE X-blade (network ports cge0-cge1 and fxg0-fxg1)

Any advanced configuration of the external network ports is beyond the scope of this implementation procedure. For more information about the many network configuration options the Celerra system supports, such as Ethernet channels, link aggregation, and FSNs, refer to the Configuring and Managing Celerra Networking and Configuring and Managing Celerra Network High Availability technical modules.


When you have finished Connect external network cables, go to Configure storage for a Fibre Channel enabled system on page 9.


Configure storage for a Fibre Channel enabled system


This section details how to create additional storage for an NX4 Fibre Channel enabled storage system using Navisphere Express.

Configure storage with Navisphere Express

Configure storage with Navisphere Express by doing the following:
1. To start Navisphere Express, open an internet browser such as Internet Explorer or Mozilla Firefox.
2. Type the IP address of a storage processor of the storage system into the internet browser address bar.
Note: This IP address is the one that you assigned when you initialized the storage system.

3. Type the user name and password to log in to Navisphere Express, as shown in Figure 3 on page 10.
Note: The default username is nasadmin and the default password is nasadmin.


Figure 3

Navisphere Express Login screen

4. To configure unused storage, select Disk Pools in the left navigation panel from the initial screen shown in Figure 4 on page 11.


Figure 4

Manage Virtual Disks screen

Note: If you are trying to create a new virtual disk (LUN) for Automatic Volume Management (AVM) to use in a stripe with existing virtual disks, the new virtual disk must match the size of the existing virtual disks. Find the information on the existing virtual disks by going to the details page for each virtual disk by selecting Manage > Virtual Disks > <Existing_Virtual_Disk_Name>. Record the MB value of the existing virtual disks and use this value as the size for any new virtual disk.

5. Click Create New Disk Pool, as shown in Figure 5 on page 12.


Figure 5

Manage Disk Pools screen

Note: You should create at least two disk pools. The software assigns each disk pool that you create to an SP as follows: Disk Pool 1 to SP A, Disk Pool 2 to SP B, Disk Pool 3 to SP A, Disk Pool 4 to SP B, and so on. All virtual disks that you create on a disk pool are automatically assigned to the same SP as the disk pool. If you create only one disk pool on the storage system, all virtual disks on the storage system are assigned to SP A and all data received, or sent, goes through SP A.


6. Select the RAID group type for the new disk pool, as shown in Figure 6 on page 13. The available RAID Group Type values depend on your system. For more information, see the NAS Support Matrix document on http://Powerlink.EMC.com.
Note: RAID5 is recommended.

Figure 6

Create Disk Pool screen

7. Select the disks in the Disk Processor Enclosure to include in the new disk pool, as shown in Figure 6.


8. Click Apply.
9. Click Create a virtual disk that can be assigned to a server.
10. Select the disk pool just created, as shown in Figure 6.
11. Type the Name for the new virtual disk(s), and select its Capacity and the Number of Virtual Disks to create, as shown in Figure 7 on page 14.
Note: It is recommended that virtual disk capacity not be larger than 2 TB.

Figure 7

Create Virtual Disks screen


12. Assign a server to the virtual disk(s) by using the Server list box, as shown in Figure 7.
Note: To send data to or receive data from a virtual disk, you must assign a server to the virtual disk.

13. Click Apply to create virtual disk(s).


Note: The system now creates the virtual disks. This may take some time depending on the size of the virtual disks.

14. Select Virtual Disks from the left navigation panel to verify the creation of the new virtual disk(s).
15. Verify the virtual disk server assignment by looking under Assigned To on the Manage Virtual Disks page, as shown in Figure 8.


Figure 8

Verify new virtual disk assignment

16. To make the new virtual disks (LUNs) available to the Celerra system, you must use Celerra Manager. Open Celerra Manager by using the following URL:
https://<control_station>

where <control_station> is the hostname or IP address of the Control Station.
17. If a security alert appears about the system's security certificate, click Yes to proceed.


18. At the login prompt, log in as user root. The default password is nasadmin.
19. If a security warning appears about the system's security certificate being issued by an untrusted source, click Yes to accept the certificate.
20. If a warning about a hostname mismatch appears, click Yes.
21. On the Celerra > Storage Systems page, click Rescan, as shown in Figure 9 on page 17.

Figure 9

Rescan Storage System in Celerra Manager


CAUTION Do not change the host LUN (virtual disk) identifier of the Celerra LUNs (virtual disks) after rescanning. Doing so may cause data loss or unavailability.

22. The user virtual disks (LUNs) are now available to the Celerra system.

When you have finished Configure storage for a Fibre Channel enabled system, go to Configure the network on page 19.


Configure the network


Using Celerra Manager, you can create interfaces on devices that are not part of a virtual device. Host or workstation access to the Celerra storage is configured by creating a network interface.
Note: You cannot create a new interface for a Data Mover while the Data Mover is failed over to its standby.

In Celerra Manager, configure a new network interface and device by doing the following:
1. Log in to Celerra Manager as root.
2. Click Celerras > <Celerra_name> > Wizards.
3. Click New Network Interface wizard to set up a new network interface. This wizard can also be used to create a new virtual device, if desired.
Note: On the Select/Create a network device page, click Create Device to create a new virtual network device. The new virtual device can be configured with one of the following high-availability features: Ethernet Channel, Link Aggregation, or Fail-Safe Network.
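If you prefer to work from the Control Station command line instead of the wizard, an interface can also be created with the server_ifconfig command. The following is a minimal sketch only; the Data Mover (server_2), device (cge0), interface name (cge0_int), IP address, netmask, and broadcast address are placeholder values to be replaced with the entries from your Create interface worksheet:

server_ifconfig server_2 -create -Device cge0 -name cge0_int -protocol IP 192.168.1.100 255.255.255.0 192.168.1.255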

When you have completed Configure the network, go to Create a file system on page 20.


Create a file system


To create a new file system, do the following steps:
1. Go to the Celerras > <Celerra_name> > File Systems tab in the left navigation menu.
2. Click New at the bottom of the File Systems screen, as shown in Figure 10.

Figure 10

File Systems screen

3. Select the Storage Pool radio button to specify where the file system will be created, as shown in Figure 11 on page 21.

Figure 11

Create new file system screen

4. Name the file system.
5. Select the system-defined storage pool from the Storage Pool drop-down menu.
Note: Based on the disks and the RAID types created in the storage system, different system-defined storage pools will appear in the storage pool list. For more information about system-defined storage pools refer to the Disk group and disk volume configurations on page 36.

6. Designate the Storage Capacity of the file system and select any other desired options.


Other file system options are listed below:
- Auto Extend Enabled: If enabled, the file system automatically extends when the high water mark is reached.
- Virtual Provisioning Enabled: This option can only be used with automatic file system extension, and together they let you grow the file system as needed.
- File-level Retention (FLR) Capability: If enabled, the file system is persistently marked as an FLR file system until it is deleted. File systems can be enabled with FLR capability only at creation time.
7. Click Create. The new file system will now appear on the File System screen, as shown in Figure 12.

Figure 12

File System screen with new file system
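The same result can be achieved from the Control Station command line. The following is a minimal sketch only; the file system name (ufs1), size, storage pool (clar_r5_performance), Data Mover (server_2), and mount point are placeholder values, and the pool name must match a system-defined storage pool that exists on your system:

nas_fs -name ufs1 -create size=10G pool=clar_r5_performance
server_mountpoint server_2 -create /ufs1
server_mount server_2 ufs1 /ufs1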


Delete the NFS export created during startup


You may have optionally created an NFS export using the Celerra Startup Assistant (CSA). If you have a minimum configuration of five or fewer disks, you can begin to use this export as a production export. If you have more than five disks, delete the NFS export created during startup as follows:
1. To delete the NFS export created during startup and make the file system unavailable to NFS users on the network:
   a. Go to Celerras > <Celerra_name> and click the NFS Exports tab.
   b. Select one or more exports to delete, and click Delete. The Confirm Delete page appears.
   c. Click OK.
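The export can also be removed from the Control Station command line with server_export. This is a sketch only; server_2 and /test_fs are placeholder values for the Data Mover name and export path shown on the NFS Exports tab:

server_export server_2 -Protocol nfs -unexport -perm /test_fs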

When you have completed Delete the NFS export created during startup, go to Create NFS exports on page 24.


Create NFS exports


To create a new NFS export, do the following:
1. Go to Celerras > <Celerra_name> and click the NFS Exports tab.
2. Click New, as shown in Figure 13.

Figure 13

NFS Exports screen

3. Select a Data Mover that manages the file system from the Choose Data Mover drop-down list on the New NFS export page, as shown in Figure 14 on page 25.


Figure 14

New NFS Export screen

4. Select the file system or checkpoint that contains the directory to export from the File System drop-down list. The list displays the mount point for all file systems and checkpoints mounted on the selected Data Mover.
Note: The Path field displays the mount point of the selected file system. This entry exports the root of the file system. To export a subdirectory, add the rest of the path to the string in the field. You may also delete the contents of this box and enter a new, complete path. This path must already exist.


5. Fill out the Host Access section by defining export permissions for host access to the NFS export.
Note: The IP address with the subnet mask can be entered in dot (.) notation, slash (/) notation, or hexadecimal format. Use colons to separate multiple entries.

Host Access:
- Read-only Export: Grants read-only access to all hosts with access to this export, except for hosts given explicit read/write access on this page.
- Read-only Hosts: Grants read-only access to the export to the hostnames, IP addresses, netgroup names, or subnets listed in this field.
- Read/write Hosts: Grants read/write access to the export to the hostnames, IP addresses, netgroup names, or subnets listed in this field.
6. Click OK to create the export.
Note: If a file system is created using the command line interface (CLI), it will not be displayed for an NFS export until it is mounted on a Data Mover.
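A comparable export can also be created from the Control Station command line with server_export. The sketch below assumes a Data Mover named server_2, an export path of /test_fs, and a single client at 10.1.1.10 that is granted read/write and root access; substitute the values from your NFS export worksheet:

server_export server_2 -Protocol nfs -option rw=10.1.1.10,root=10.1.1.10 /test_fs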

The new NFS export will now appear on the NFS export screen, as shown in Figure 15 on page 27.


Figure 15

NFS export screen with new NFS export

When you have finished Create NFS exports, go to Configure hosts on page 28.


Configure hosts
To mount an NFS export you need the source, which includes the IP address or hostname of the Data Mover and the export pathname. You can collect these values from the NFS export worksheet on page 5. To use this new NFS export on the network, do the following:
1. Open a UNIX prompt on a client computer connected to the same subnet as the Celerra system.
2. Log in as root.
3. Enter the following command at the UNIX prompt to mount the NFS export:
mount <data_mover_IP>:/<fs_export_name> /<mount_point>

4. Change directories to the new export by typing:


cd /<mount_point>

5. Confirm the amount of storage in the export by typing:


df /<mount_point>
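For example, with a Data Mover interface at 192.168.1.100, an export named /test_fs, and a local mount point of /mnt/test_fs (all placeholder values taken from a hypothetical worksheet), the sequence would look like the following; the mount point directory must already exist on the client:

mkdir -p /mnt/test_fs
mount 192.168.1.100:/test_fs /mnt/test_fs
cd /mnt/test_fs
df /mnt/test_fs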

For more information about NFS exports please refer to the Configuring NFS on Celerra technical module found at http://Powerlink.EMC.com.


Configure and test standby relationships


EMC recommends that multi-blade Celerra systems be configured with a Primary/Standby blade failover configuration to ensure data availability in the case of a blade (server/Data Mover) fault. Creating a standby blade ensures continuous access to file systems on the Celerra storage system. When a primary blade fails over to a standby, the standby blade assumes the identity and functionality of the failed blade and functions as the primary blade until the faulted blade is healthy and manually failed back to functioning as the primary blade.

Configure a standby relationship

Before a blade can function as a standby when required, it must first be configured as a standby for one or more primary blades. To configure a standby blade:
1. Determine the ideal blade failover configuration for the Celerra system based on site requirements and EMC recommendations. EMC recommends a minimum of one standby blade for up to three primary blades.

CAUTION The standby blade(s) must have the same network capabilities (NICs and cables) as the primary blades with which they will be associated. This is because the standby blade will assume the faulted primary blade's network identity (NIC IP and MAC addresses), storage identity (controlled file systems), and service identity (controlled shares and exports).

2. Define the standby configuration using Celerra Manager following the blade standby configuration recommendation:
   a. Select <Celerra_name> > Data Movers > <desired_primary_blade> from the left-hand navigation panel.
   b. On the Data Mover Properties screen, configure the standby blade for the selected primary blade by checking the box of the desired Standby Mover and define the Failover Policy.


Figure 16

Configure a standby in Celerra Manager

Note: A failover policy is a predetermined action that the Control Station invokes when it detects a blade failure, based on the failover policy type specified. It is recommended that the Failover Policy be set to auto.

c. Click Apply.
Note: The blade configured as standby will now reboot.


d. Repeat for each primary blade in the Primary/Standby configuration.

Test the standby configuration

It is recommended that the functionality of the blade failover configuration be tested before the system goes into production. When a failover condition occurs, the Celerra is able to transfer functionality from the primary blade to the standby blade without disrupting file system availability. For a standby blade to successfully stand in as a primary blade, the blades must have the same network connections (Ethernet and Fibre cables), network configurations (EtherChannel, Fail Safe Network, High Availability, and so forth), and switch configuration (VLAN configuration, and so on).

CAUTION You must cable the failover blade identically to its primary blade. If configured network ports are left uncabled when a failover occurs, access to file systems will be disrupted.

To test the failover configuration, do the following:
1. Open an SSH session to the Control Station (CS) with an SSH client such as PuTTY.
2. Log in to the CS as nasadmin. Change to the root user by entering the following command:
su root

Note: The default password for root is nasadmin.

3. Collect the current names and types of the system blades:


# nas_server -l

Sample output:
id    type  acl   slot  groupID  state  name
1      1    1000     2              0    server_2
2      4    1000     3              0    server_3

Note: The name column in the command output above provides the names of the blades, and the type column designates the blade type: 1 (primary) or 4 (standby).


4. After I/O traffic is running on the primary blade's network port(s), monitor this traffic by entering:
# server_netstat <server_name> -i

Example:
[nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i
Name   Mtu    Ibytes     Ierror  Obytes    Oerror  PhysAddr
****************************************************************************
fxg0   9000   0          0       0         0       0:60:16:32:4a:30
fxg1   9000   0          0       0         0       0:60:16:32:4a:31
mge0   9000   851321     0       812531    0       0:60:16:2c:43:2
mge1   9000   28714095   0       1267209   0       0:60:16:2c:43:1
cge0   9000   614247     0       2022      0       0:60:16:2b:49:12
cge1   9000   0          0       0         0       0:60:16:2b:49:13

5. Manually force a graceful failover of the primary blade to the standby blade by using the following command:
# server_standby <primary_blade> -activate mover

Example:
[nasadmin@rtpplat11cs0 ~]$ server_standby server_2 -activate mover
server_2 :
server_2 : going offline
server_3 : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
server_2 : renamed as server_2.faulted.server_3
server_3 : renamed as server_2

Note: This command will rename the primary and standby blades. In the example above, server_2, the primary blade, was rebooted and renamed server_2.faulted.server_3, and server_3 was renamed as server_2.


6. Verify that the failover has completed successfully by: a. Checking that the blades have changed names and types:
# nas_server -l

Sample output:
id    type  acl   slot  groupID  state  name
1      1    1000     2              0    server_2.faulted.server_3
2      1    1000     3              0    server_2

Note: In the command output above, each blade's name has changed, and the type column designates both blades as type 1 (primary).

b. Checking I/O traffic is flowing to the primary blade by entering:


# server_netstat <server_name> -i

Note: The primary blade, though physically a different blade, retains the initial name.

Sample output:
[nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i
Name   Mtu    Ibytes     Ierror  Obytes   Oerror  PhysAddr
****************************************************************************
fxg0   9000   0          0       0        0       0:60:16:32:4b:18
fxg1   9000   0          0       0        0       0:60:16:32:4b:19
mge0   9000   14390362   0       786537   0       0:60:16:2c:43:30
mge1   9000   16946      0       3256     0       0:60:16:2c:43:31
cge0   9000   415447     0       3251     0       0:60:16:2b:49:12
cge1   9000   0          0       0        0       0:60:16:2b:48:ad

Note: The MAC addresses in the PhysAddr column have changed, reflecting that the failover completed successfully.

7. Verify that the blades appear with reason code 5 by typing:


# /nas/sbin/getreason
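Sample output (illustrative values only; slot numbers vary by system). Reason code 5 indicates that a blade is booted and has been contacted by the Control Station:

10 - slot_0 primary control station
 5 - slot_2 contacted
 5 - slot_3 contacted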


8. After the blades appear with reason code 5, manually restore the failed over blade to its primary status by typing the following command:
# server_standby <primary_blade> -restore mover

Example:
server_standby server_2 -restore mover
server_2 :
server_2 : going standby
server_2.faulted.server_3 : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
server_2 : renamed as server_3
server_2.faulted.server_3 : renamed as server_2

Note: This command will rename the primary and standby blades. In the example above, server_2, the standing primary blade, was rebooted and renamed server_3, and server_2.faulted.server_3 was renamed as server_2.

9. Verify that the failback has completed successfully by: a. Checking that the blades have changed back to the original name and type:
# nas_server -l

Sample output:
id    type  acl   slot  groupID  state  name
1      1    1000     2              0    server_2
2      4    1000     3              0    server_3


b. Checking that I/O traffic is flowing to the primary blade by entering:


# server_netstat <server_name> -i

Sample output:
[nasadmin@rtpplat11cs0 ~]$ server_netstat server_2 -i
Name   Mtu    Ibytes     Ierror  Obytes    Oerror  PhysAddr
****************************************************************************
fxg0   9000   0          0       0         0       0:60:16:32:4a:30
fxg1   9000   0          0       0         0       0:60:16:32:4a:31
mge0   9000   851321     0       812531    0       0:60:16:2c:43:2
mge1   9000   28714095   0       1267209   0       0:60:16:2c:43:1
cge0   9000   314427     0       1324      0       0:60:16:2b:49:12
cge1   9000   0          0       0         0       0:60:16:2b:49:13

Note: The MAC addresses in the PhysAddr column have reverted to their original values, reflecting that the failback completed successfully.

Refer to the Configuring Standbys on EMC Celerra technical module on http://Powerlink.EMC.com for more information about determining and defining blade standby configurations.


Appendix
Disk group and disk volume configurations
Table 2 maps each disk group type and attach type to a storage profile, which determines the automatic volume management (AVM) storage pool that results. The storage profile name identifies a set of rules used by AVM to determine what type of disk volumes to use to provide storage for the pool.

Table 2    Disk group and disk volume configurations

Disk group type                                Attach type      Storage profile
RAID 5 8+1                                     Fibre Channel    clar_r5_economy (8+1)
RAID 5 4+1                                     Fibre Channel    clar_r5_performance (4+1)
RAID 1                                         Fibre Channel    clar_r1
RAID 5 4+1                                     Fibre Channel    clar_r5_performance
RAID 1                                         Fibre Channel    clar_r1
RAID 6 4+2, RAID 6 12+2                        Fibre Channel    clar_r6
RAID 5 6+1                                     ATA              clarata_archive
RAID 5 4+1 (CX3 only)                          ATA              clarata_archive
RAID 3 4+1, RAID 3 8+1                         ATA              clarata_r3
RAID 6 4+2, RAID 6 12+2                        ATA              clarata_r6
RAID 5 6+1 (CX3 only), RAID 5 4+1 (CX3 only)   LCFC             clarata_archive
RAID 3 4+1, RAID 3 8+1                         LCFC             clarata_r3


Table 2    Disk group and disk volume configurations (continued)

Disk group type           Attach type   Storage profile
RAID 6 4+2, RAID 6 12+2   LCFC          clarata_r6
RAID 5 2+1                SATA          clarata_archive
RAID 5 3+1                SATA          clarata_archive
RAID 5 4+1                SATA          clarata_archive
RAID 5 5+1                SATA          clarata_archive
RAID 1/0 (2 disks)        SATA          clarata_r10
RAID 6 4+2                SATA          clarata_r6
RAID 5 2+1                SAS           clarsas_archive
RAID 5 3+1                SAS           clarsas_archive
RAID 5 4+1                SAS           clarsas_archive
RAID 5 5+1                SAS           clarsas_archive
RAID 1/0 (2 disks)        SAS           clarsas_r10
RAID 6 4+2                SAS           clarsas_r6
